
Boost Workshop at OOPSLA 2004


Jeremy Siek

Aug 9, 2004, 4:49:32 PM
CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf


Submissions

Each participant will be expected to develop a position paper
describing a particular library or category of libraries that is
lacking in the current C++ standard library and Boost. The participant
should explain why the library or libraries would advance the state of
C++ programming. Ideally, the paper should sketch the proposed library
interface and concepts. This will be a unique opportunity to critique
and review library proposals. Alternatively, a participant might
describe the strengths and weaknesses of existing libraries and how
they might be modified to fill the need.

Form of Submissions

Submissions should consist of a 3-10 page paper that gives at least
the motivation for and an informal description of the proposal. This
may be augmented by source or other documentation of the proposed
libraries, if available. The preferred form of submission is a PDF file.

Important Dates

• Submission deadline for early registration: September 10, 2004
• Early Notification of selection: September 15, 2004
• OOPSLA early registration deadline: September 16, 2004
• OOPSLA conference: October 24-28, 2004

Contact committee oopsl...@crystalclearsoftware.com

Program Committee
Jeff Garland
Nicolai Josuttis
Kevlin Henney
Jeremy Siek

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Andrei Alexandrescu (See Website for Email)

Aug 10, 2004, 9:26:44 PM
"Jeremy Siek" <jerem...@gmail.com> wrote in message
news:21925601.04080...@posting.google.com...

> CALL FOR PAPERS/PARTICIPATION
>
> C++, Boost, and the Future of C++ Libraries
> Workshop at OOPSLA
> October 24-28, 2004
> Vancouver, British Columbia, Canada
> http://tinyurl.com/4n5pf
[snip]

I wonder if the submitters follow this post's trail or I should email
them... anyway, here goes.

I am not sure if I'll ever get around to writing it, so I figured I'd post
the idea here; maybe someone will pursue it. In short, I think a proposal
for a replacement of C++'s preprocessor would be welcome.

Today Boost uses a "preprocessor library", which in turn (please correct me
if my understanding is wrong) relies on a program to generate many big
macros up to a fixed "maximum" to overcome the preprocessor's inability to
deal with a variable number of arguments.

Also, please correct me if I'm wrong (because I haven't really looked deep
into it), but my understanding is that people around Boost see the PP
library as a necessary but unpleasantly-smelling beast that makes things
around it smelly as well. [Reminds me of the Romanian story: there was a guy
called Pepelea (pronounced Peh-Peh-leah) who was poor but had inherited a
beautiful house. A rich man wanted to buy it, and Pepelea sold it on one
condition: that Pepelea owns a nail in the living room's wall, in which he
can hang whatever he wanted. Now when the rich man was having guests and
whatnot, Pepelea would drop by and embarrassingly hang a dirty old coat. Of
course in the end the rich man got so exasperated that he gave Pepelea the
house back for free. Ever since that story, "Pepelea's nail" is referred to
as something like... like what the preprocessor is to the C++ language.]

That would be reason one to create a new C++ preprocessor. (And when I say
"new," that's not as in "yet another standard C++ preprocessor". I have
been happy to see my suggestion on the Boost mailing list followed in that
the WAVE preprocessor was built using Boost's own parser generator library,
Spirit.) What I am talking about now is "a backwards-INcompatible C++
preprocessor aimed at displacing the existing preprocessor forever and
replacing it with a better one".

If backed by the large Boost community, the new preprocessor could easily
gain popularity and be used in new projects instead of the old one. To avoid
inheriting the past's mistakes, the new preprocessor doesn't need to be
syntax-compatible in any way with the old preprocessor, but only
functionally compatible, in that it can do all that can be done with the
existing preprocessor, while having new means to do things more safely and
better.

I think that would be great. Because if we all stop coding for a second and
think about it, what's the ugliest scar on C++'s face - what is Pepelea's nail?
Maybe "export" which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years that
took them to implement it? Maybe namespaces that are so badly designed,
you'd think they are inherited from C? I'd say they are good contenders
against each other, but none of them holds a candle to the preprocessor.

So, a proposal for a new preprocessor would be great. Here's a short wish
list:

* Does what the existing one does (although some of those coding patterns
will be discouraged);

* Supports one-time file inclusion and multiple file inclusion, without the
need for guards (yes, there are subtle issues related to that... let's at
least handle a well-defined subset of the cases);

* Allows defining "hygienic" macros - macros that expand to the same text
regardless of the context in which they are expanded;

* Allows defining scoped macros - macros visible only within the current
scope;

* Has recursion and possibly iteration;

* Has a simple, clear expansion model (negative examples abound - NOT like
m4, NOT like TeX... :o));

* Supports a variable number of arguments. I won't venture into thinking of
more advanced support à la Scheme or Dylan or Java extender macros.


Andrei

P.M.

Aug 11, 2004, 3:46:09 PM
"Andrei Alexandrescu (See Website for Email)"

> replacement of C++'s preprocessor...

If you're going to build a better text-substitution layer, or even a
true Lisp-ish macro system (although one which was restricted to
compile-time), why not go further and clean up more of the language via
this new parser? How about embracing and simplifying template
meta-programming with a better template system that is a true
compile-time functional language in its own right with unlimited
recursion, clear error messages, etc. In fact, it may be possible to
unify both this new uber macro system and the template
expansion/meta-programming syntax.

* C++ improvements: Dare to dream, but wear asbestos underpants just
in case.

David Abrahams

Aug 11, 2004, 4:19:01 PM
"Andrei Alexandrescu (See Website for Email)" <SeeWebsit...@moderncppdesign.com> writes:

> "Jeremy Siek" <jerem...@gmail.com> wrote in message
> news:21925601.04080...@posting.google.com...
> > CALL FOR PAPERS/PARTICIPATION
> >
> > C++, Boost, and the Future of C++ Libraries
> > Workshop at OOPSLA
> > October 24-28, 2004
> > Vancouver, British Columbia, Canada
> > http://tinyurl.com/4n5pf
> [snip]
>
> I wonder if the submitters follow this post's trail or I should email
> them... anyway, here goes.
>
> I am not sure if I'll ever get around to writing it, so I figured I'd post
> the idea here; maybe someone will pursue it. In short, I think a proposal
> for a replacement of C++'s preprocessor would be welcome.

Hard to see how this is going to be about C++ libraries, but I'll
follow along.

> Today Boost uses a "preprocessor library", which in turn (please
> correct me if my understanding is wrong) relies on a program to
> generate many big macros up to a fixed "maximum" to overcome
> preprocessor's incapability to deal with variable number of
> arguments.

That's a pretty jumbled understanding of the situation.

The preprocessor library is a library of headers and macros that allow
you to generate C/C++ code by writing programs built out of macro
invocations. You can see the sample appendix at
http://www.boost-consulting.com/mplbook for a reasonably gentle
introduction.

In the preprocessor library's _implementation_, there are lots of
boilerplate program-generated macros, but that's an implementation
detail that's only needed because so many preprocessors are badly
nonconforming. In fact, the library's maintainer, Paul Mensonides,
has a _much_ more elegantly-implemented PP library
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).

There is no way to "overcome" the PP's incapability to deal with
variable number of arguments other than by using PP data structures as
described in http://boost-consulting.com/mplbook/preprocessor.html to
pass multiple items as a single macro argument, or by extending the PP
to support variadic macros a la C99, as the committee is poised to do.

The PP library is _often_ used to overcome C++'s inability to support
typesafe function (template)s with variable numbers of arguments, by
writing PP programs that generate overloaded function (template)s.

> Also, please correct me if I'm wrong (because I haven't really
> looked deep into it), but my understanding is that people around
> Boost see the PP library as a necessary but unpleasantly-smelling
> beast that makes things around it smelly as well.

I don't see it that way, although I wish there were ways to avoid
using it in some of the more common cases (variadic templates). Maybe
some others do see it like that.

> [Reminds me of the Romanian story: there was a guy called Pepelea
> (pronounced Peh-Peh-leah) who was poor but had inherited a beautiful
> house. A rich man wanted to buy it, and Pepelea sold it on one
> condition: that Pepelea owns a nail in the living room's wall, in
> which he can hang whatever he wanted. Now when the rich man was
> having guests and whatnot, Pepelea would drop by and embarrassingly
> hang a dirty old coat. Of course in the end the rich man got so
> exasperated that he gave Pepelea the house back for free. Ever since
> that story, "Pepelea's nail" is referred to as something
> like... like what the preprocessor is to the C++ language.]

cute.

> That would be reason one to create a new C++ preprocessor. (And when
> I say "new," that's not like in "yet another standard C++
> preprocessor". I have been happy to see my suggestion on the Boost
> mailing list followed in that the WAVE preprocessor was built using
> Boost's own parser generator library, Spirit.) What I am talking about now
> is "a backwards-INcompatible C++ preprocessor aimed at displacing
> the existing preprocessor forever and replacing it with a better
> one".

Bjarne's plan for that is to gradually make the capabilities of the
existing PP redundant by introducing features in the core
language... and then, finally, deprecate it.

> If backed by the large Boost community, the new preprocessor could
> easily gain popularity and be used in new projects instead of the
> old one.

I doubt even with Boost backing that the community at large is likely
to easily accept integrating another tool into its build processes.
The big advantage of the C++ PP is that it's built-in... and that's
one of the biggest reasons that the PP _lib_ is better for my purposes
than any of the ad hoc code generators I've written/used in the past.

> To avoid inheriting the past's mistakes, the new preprocessor
> doesn't need to be syntax-compatible in any way with the old
> preprocessor, but only functionally compatible, in that it can do
> all that can be done with the existing preprocessor, only that it
> has new means to do things safer and better.

I think Bjarne's approach is the best way to do that sort of
replacement. As long as the PP's functionality is really being
replaced by a textual preprocessor (or a token-wise one as we have
today) it's going to suffer many of the same problems. Many of those
jobs should be filled by a more robust metaprogramming system that's
fully integrated into the language and not just a processing phase.

> I think that would be great. Because if we all stop coding for a
> second and think of it, what's the ugliest scar on C++'s face - what
> is Pepelea's nail? Maybe "export" which is so broken and so useless
> and so abusive that its implementers have developed Stockholm
> syndrome during the long years that took them to implement it?

That's slander ;->. Export could be used to optimize template
metaprograms, for example (compile the templates to executable code
that does instantiations). It may not have been a good idea, but
those who suffered through implementing it now think it has some
potential utility.

> Maybe namespaces that are so badly designed, you'd think they are
> inherited from C?

Wow, I'm impressed; that's going to piss off both the hardcore C _and_
C++ people!

I've never seen a serious proposal for better namespaces, other than
http://boost-consulting.com/writing/qn.html, which seems to have been
generally ignored. Have you got any ideas?

> I'd say they are good contenders against each other, but none of
> them holds a candle to the preprocessor.
>
> So, a proposal for a new preprocessor would be great.

If that's your point, I think it's an interesting one, but somehow I
still don't get how it could be appropriate for a workshop on C++
libraries.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Andrei Alexandrescu (See Website for Email)

Aug 12, 2004, 8:05:22 AM
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:uzn51r...@boost-consulting.com...

> "Andrei Alexandrescu (See Website for Email)"
> > Today Boost uses a "preprocessor library", which in turn (please
> > correct me if my understanding is wrong) relies on a program to
> > generate many big macros up to a fixed "maximum" to overcome
> > preprocessor's incapability to deal with variable number of
> > arguments.
>
> That's a pretty jumbled understanding of the situation.
>
> The preprocessor library is a library of headers and macros that allow
> you to generate C/C++ code by writing programs built out of macro
> invocations. You can see the sample appendix at
> http://www.boost-consulting.com/mplbook for a reasonably gentle
> introduction.

Ok, it's a half-jumbled understanding of the situation, coupled with a
half-jumbled expression of my half-jumbled understanding :o).

First, I've looked in my boost implementation to see things like
BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
in my post) is that such macros are generated by a program. That program is
admittedly not part of the library as distributed (I believe it is part of
the maintenance process), but I subjectively consider it evidence that a
more elegant approach would be welcome.

Then I've looked again over the PP library (this time through the link
you've sent), and honestly it reminds me of TeX macro tricks more than any
example of elegant programming. As such, I'd find it hard to defend it with
a straight face, and I am frankly surprised you do. But then I understand
the practical utility, as you point out below.

> Bjarne's plan for that is to gradually make the capabilities of the
> existing PP redundant by introducing features in the core
> language... and then, finally, deprecate it.

It's hard to introduce the ability to define syntactic replacement (which
many people consider useful) in the core language.

> I doubt even with Boost backing that the community at large is likely
> to easily accept integrating another tool into its build processes.
> The big advantage of the C++ PP is that it's built-in... and that's
> one of the biggest reasons that the PP _lib_ is better for my purposes
> than any of the ad hoc code generators I've written/used in the past.

Practicality, and not elegance or suitability, is about the only reason that
I could agree with.

> I think Bjarne's approach is the best way to do that sort of
> replacement. As long as the PP's functionality is really being
> replaced by a textual preprocessor (or a token-wise one as we have
> today) it's going to suffer many of the same problems. Many of those
> jobs should be filled by a more robust metaprogramming system that's
> fully integrated into the language and not just a processing phase.

I think here we talk about different things. One path to pursue is indeed to
provide better means for template programming, and another is to provide
syntactic manipulation. To me, they are different and complementary
techniques.

> That's slander ;->. Export could be used to optimize template
> metaprograms, for example (compile the templates to executable code
> that does instantiations). It may not have been a good idea, but
> those who suffered through implementing it now think it has some
> potential utility.

Sure. Similarly, they discovered that the expensive air filters for the
space shuttle can be used (only) as coffee filters for the team on the
ground :o).

> > Maybe namespaces that are so badly designed, you'd think they are
> > inherited from C?
>
> Wow, I'm impressed; that's going to piss off both the hardcore C _and_
> C++ people!

Heh heh... I knew this was going to be taken that way :o). What I meant was,
many shortcomings of C++ root in a need for compatibility with C. With
namespaces and export, there's no C to blame :o).

> I've never seen a serious proposal for better namespaces, other than
> http://boost-consulting.com/writing/qn.html, which seems to have been
> generally ignored. Have you got any ideas?

That's a good, solidly motivated doc; I am sorry it does not get the
attention that it deserves.

> > So, a proposal for a new preprocessor would be great.
>
> If that's your point, I think it's an interesting one, but somehow I
> still don't get how it could be appropriate for a workshop on C++
> libraries.

Ok, I'll drop it. May I still bicker about it on the Usenet? :o)


Andrei

Paul Mensonides

Aug 12, 2004, 8:13:57 AM
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:uzn51r...@boost-consulting.com...

> > Today Boost uses a "preprocessor library", which in turn (please
> > correct me if my understanding is wrong) relies on a program to
> > generate many big macros up to a fixed "maximum" to overcome
> > preprocessor's incapability to deal with variable number of
> > arguments.
>
> That's a pretty jumbled understanding of the situation.
>
> The preprocessor library is a library of headers and macros that allow
> you to generate C/C++ code by writing programs built out of macro
> invocations. You can see the sample appendix at
> http://www.boost-consulting.com/mplbook for a reasonably gentle
> introduction.
>
> In the preprocessor library's _implementation_, there are lots of
> boilerplate program-generated macros, but that's an implementation
> detail that's only needed because so many preprocessors are badly
> nonconforming. In fact, the library's maintainer, Paul Mensonides,
> has a _much_ more elegantly-implemented PP library
> (http://sourceforge.net/projects/chaos-pp/) that has almost no
> boilerplate, but it only works on a few compilers (GCC among them).

Yes, for example, it would be relatively easy to construct a macro that would
(given enough memory) run for 10,000 years generating billions upon trillions of
results. Obviously that isn't useful; I'm merely pointing out that many of the
limits Andrei refers to above aren't really limits.

> There is no way to "overcome" the PP's incapability to deal with
> variable number of arguments other than by using PP data structures as
> described in http://boost-consulting.com/mplbook/preprocessor.html to
> pass multiple items as a single macro argument, or by extending the PP
> to support variadic macros a la C99, as the committee is poised to do.

Incidentally, variadics make for highly efficient data structures--basically
because they can be unrolled. Given variadics, it is possible to tell whether
there are at least a certain number of elements in constant time. This allows unrolled
processing in batch.

> The PP library is _often_ used to overcome C++'s inability to support
> typesafe function (template)s with variable numbers of arguments, by
> writing PP programs that generate overloaded function (template)s.

Yes.

> > Also, please correct me if I'm wrong (because I haven't really
> > looked deep into it), but my understanding is that people around
> > Boost see the PP library as a necessary but unpleasantly-smelling
> > beast that makes things around it smelly as well.
>
> I don't see it that way, although I wish there were ways to avoid
> using it in some of the more common cases (variadic template). Maybe
> some others do see it like that.

In some ways, with Chaos more so than Boost PP, preprocessor-based code
generation is very elegant. It should be noted also that well-designed code
generation via the preprocessor typically yields more type-safe code that is
less error-prone and more maintainable than the alternatives.

> > That would be reason one to create a new C++ preprocessor. (And when
> > I say "new," that's not like in "yet another standard C++
> > preprocessor". I have been happy to see my suggestion on the Boost
> > mailing list followed in that the WAVE preprocessor was built using
> > Boost's own parser generator library, Spirit.) What I am talking about now
> > is "a backwards-INcompatible C++ preprocessor aimed at displacing
> > the existing preprocessor forever and replacing it with a better
> > one".
>
> Bjarne's plan for that is to gradually make the capabilities of the
> existing PP redundant by introducing features in the core
> language... and then, finally, deprecate it.

> I doubt even with Boost backing that the community at large is likely
> to easily accept integrating another tool into its build processes.
> The big advantage of the C++ PP is that it's built-in... and that's
> one of the biggest reasons that the PP _lib_ is better for my purposes
> than any of the ad hoc code generators I've written/used in the past.
>
> > To avoid inheriting the past's mistakes, the new preprocessor
> > doesn't need to be syntax-compatible in any way with the old
> > preprocessor, but only functionally compatible, in that it can do
> > all that can be done with the existing preprocessor, only that it
> > has new means to do things safer and better.

Safer? In what way? Name clashes? Multiple evaluation?

I have probably written more macros than any other person. Chaos alone has
nearly two thousand *interface* (i.e. not including implementation) macros. The
extent is not so great in the pp-lib, but it is large nonetheless, and the
pp-lib is widely used even if not directly. However, there have been no cases
of name collisions that I am aware of--simply because the library follows simple
guidelines on naming conventions. The fact that users of Boost need not even be
aware of the preprocessor-generation used within Boost is a further testament to
the elegance of the solutions--even in spite of the limitations and hacks
imposed by non-conforming preprocessors.

Consider the recent CUJ with the Matlab article which has unprefixed,
non-all-caps macro definitions *on the cover of the magazine*. Though the code
of which that is a part may well be good overall and serve a useful function,
those macros are simply bad coding--and nothing can prevent bad coding and
wanton disregard for the consequences of actions.

As far as multiple evaluation is concerned, that is a result of viewing a macro
as a function--which it is not. Macros expand to code--they have nothing
specifically to do with function calls or any other language abstraction. Even
today people recommend, for example, that macros that expand to statements (or
similar) should leave out the trailing semicolon so the result looks like a
normal function call. In general, that is a terrible strategy. Macro
invocations are not function calls, do not have the semantics of function calls,
and should not be intentionally made to *act* like function calls. The code
that a macro expands to is the functional result of that macro and should be
documented as such--not just what that code does.

> I think Bjarne's approach is the best way to do that sort of
> replacement. As long as the PP's functionality is really being
> replaced by a textual preprocessor (or a token-wise one as we have
> today) it's going to suffer many of the same problems. Much of those
> jobs should be filled by a more robust metaprogramming system that's
> fully integrated into the language and not just a processing phase.

This is a fundamental issue. It would indeed be great to have a more advanced
preprocessor capable of doing many of the things that Boost PP (or Chaos) is
designed to enable. However, there will *always* be a need to manipulate source
without the semantic attachment of the underlying language's syntactic and
semantic rules. In many cases those rules lead to generation code that is
significantly more obtuse than it actually needs to be because the restrictions
imposed by syntax are fundamentally at odds with the creation of that syntax (or
the equivalent semantic effect). If there were another metaprogramming layer in
the compilation process (which would be fine), the preprocessor would just be
used to generate that too--for the basic reason that the syntax of the
generated language just gets in the way.

The ability to manipulate the core language without that attachment is one of
the preprocessor's greatest strengths. It is also one of the preprocessor's
greatest weaknesses. Just like any other language feature, particularly in C
and C++, it must be used with care because just like any other language feature,
it can be easily abused. The preprocessor enables very elegant and good
solutions when used well.

> > I think that would be great. Because if we all stop coding for a
> > second and think of it, what's the ugliest scar on C++'s face

Without resorting to arbitrary rhetoric such as "macros are evil" what is ugly
about the preprocessor? Certain uses of the preprocessor have in the past
caused (and still cause) problems. However, labeling macros as ugly because
they can be misused is taking the easy way out. It represents a failure to
isolate and understand how those problems surface and how they should be avoided
through specific guidelines (instead of gross generalizations). This has been
happening (and is still ongoing) with the underlying language for some time.
You avoid pitfalls in languages like C and C++ through understanding.
Guidelines themselves are not truly effective unless they are merely reminders
of the reasoning behind the guidelines. Otherwise, they just lead to
brain-dead, in-the-box programming, and inhibit progress.

> > - what
> > is Pepelea's nail? Maybe "export" which is so broken and so useless
> > and so abusive that its implementers have developed Stockholm
> > syndrome during the long years that took them to implement it?
>
> That's slander ;->. Export could be used to optimize template
> metaprograms, for example (compile the templates to executable code
> that does instantiations). It may not have been a good idea, but
> those who suffered through implementing it now think it has some
> potential utility.

I agree.

Regards,
Paul Mensonides

Andrei Alexandrescu (See Website for Email)

Aug 13, 2004, 9:40:25 AM
"Paul Mensonides" <leav...@comcast.net> wrote in message
news:bvGdnd42scu...@comcast.com...

> In some ways, with Chaos more so than Boost PP, preprocessor-based code
> generation is very elegant. It should be noted also that well-designed code
> generation via the preprocessor typically yields more type-safe code that
> is less error-prone and more maintainable than the alternatives.

It would be interesting to see some examples of that around here. I would be
grateful if you posted some.

> Safer? In what way? Name clashes? Multiple evaluation?
>
> I have probably written more macros than any other person.

I think you mean "I have probably written more C++ macros than any other
person." That detail is important. I'm not one to claim having written lots
of macros in any language, and I apologize if the amendment above sounds
snooty. I just think it's reasonable to claim that the C++ preprocessor
compares very unfavorably with many other languages' means for syntactic
abstractions.

I totally agree with everything you wrote, but my point was, I believe,
misunderstood. Yes, "macros are evil" is an easy cop-out. But I never said
that. My post says what amounts to "The C++ preprocessor sucks". It
sucks because it is not a powerful-enough tool. That's why.

So let me restate my point. Macros are great. I love macros. Syntactic
abstractions have their place in any serious language, as you very nicely
point out. And yes, they are distinct from other means of abstraction. And
yes, they can be very useful, and shouldn't be banned just because they can
be misused.

(I'll make a parenthesis here that I think is important. I believe the worst
thing that the C/C++ preprocessor has ever done is to steer an entire huge
community away from the power of syntactic abstractions.)

So, to conclude, my point was that the preprocessor is too primitive a tool
for implementing syntactic abstractions with.

Let's think wishes. You've done a great many good things with the
preprocessor, so you are definitely the one to be asked. What features do
you think would have made it easier for you and your library's clients?


Andrei

Paul Mensonides

Aug 13, 2004, 10:32:52 AM
"Andrei Alexandrescu (See Website for Email)"
<SeeWebsit...@moderncppdesign.com> wrote in message
news:2nvhm1F...@uni-berlin.de...

> First, I've looked in my boost implementation to see things like
> BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
> BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
> in my post) is that such macros are generated by a program. That program is
> admittedly not part of the library as distributed (I believe it is part of
> the maintenance process), but I subjectively consider it a witness that a
> more elegant approach would be welcome.

Agreed, that implementation is junk and is the result of poor preprocessor
conformance.

> Then I've looked again over the PP library (this time through the link
> you've sent),

(I believe that the link Dave posted was to Chaos--which is distinct from Boost
Preprocessor.)

> and honestly it reminds me of TeX macro tricks more than any
> example of elegant programming. As such, I'd find it hard to defend it with
> a straight face, and I am frankly surprised you do. But then I understand
> the practical utility, as you point out below.

What it reminds you of is irrelevant. You know virtually nothing about how it
works--you've never taken the time. Without that understanding, you cannot
critique its elegance or lack thereof.

> > Bjarne's plan for that is to gradually make the capabilities of the
> > existing PP redundant by introducing features in the core
> > language... and then, finally, deprecate it.
>
> It's hard to introduce the ability to define syntactic replacement (which
> many people consider useful) in the core language.

I agree--but that doesn't mean that we can't take steps in that direction.

> > I doubt even with Boost backing that the community at large is likely
> > to easily accept integrating another tool into its build processes.
> > The big advantage of the C++ PP is that it's built-in... and that's
> > one of the biggest reasons that the PP _lib_ is better for my purposes
> > than any of the ad hoc code generators I've written/used in the past.
>
> Practicality, and not elegance or suitability, is about the only reason that
> I could agree with.

Once again, a quick glance is wholly insufficient. You have not taken the time
to learn the idioms involved. The solutions that Chaos uses *internally* are
indeed far more elegant than you realize. Likewise, the solutions that Chaos
(or Boost PP) engenders through client code are more elegant than you realize.
You simply don't know enough about it to weigh the pros and cons.

Regards,
Paul Mensonides

David Abrahams

Aug 13, 2004, 12:49:41 PM
"Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:uzn51r...@boost-consulting.com...
> > "Andrei Alexandrescu \(See Website for Email\)"
> > > Today Boost uses a "preprocessor library", which in turn (please
> > > correct me if my understanding is wrong) relies on a program to
> > > generate many big macros up to a fixed "maximum" to overcome the
> > > preprocessor's inability to deal with variable numbers of
> > > arguments.
> >
> > That's a pretty jumbled understanding of the situation.
> >
> > The preprocessor library is a library of headers and macros that allow
> > you to generate C/C++ code by writing programs built out of macro
> > invocations. You can see the sample appendix at
> > http://www.boost-consulting.com/mplbook for a reasonably gentle
> > introduction.
>
> Ok, it's a half jumbled understanding of the situ, coupled with a half
> jumbled expression of my half-jumbled understanding :o).
>
> First, I've looked in my boost implementation to see things like
> BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
> BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
> in my post) is that such macros are generated by a program.

Yes, but as I mentioned none of that is required in std C++.
http://sourceforge.net/projects/chaos-pp/ doesn't use any
program-generated macros.

> That program is admittedly not part of the library as distributed (I
> believe it is part of the maintenance process), but I subjectively
> consider it a witness that a more elegant approach would be welcome.

Yeah, I'd rather be using Chaos everywhere instead of the current
Boost PP lib. Too bad it isn't portable in real life.

> Then I've looked again over the PP library (this time through the link
> you've sent), and honestly it reminds me of TeX macro tricks more than any
> example of elegant programming.

Where are the similarities with TeX macro tricks?

> As such, I'd find it hard to defend it with a straight face, and I
> am frankly surprised you do.

You're surprised I defend the PP library based on the fact that it
reminds _you_ of TeX macros?

The PP lib provides me with an expressive programming system for code
generation using well-understood functional programming idioms. In
the domain of generating C++ from token fragments, it's hard to
imagine what more one could want other than some syntactic sugar and
scoping.

> But then I understand the practical utility, as you point out below.

> > Bjarne's plan for that is to gradually make the capabilities of the
> > existing PP redundant by introducing features in the core
> > language... and then, finally, deprecate it.
>
> It's hard to introduce the ability to define syntactic replacement
> (which many people consider useful) in the core language.

Right. I personally think the PP will always have a role. That
said, I think its role could be substantially reduced.

> > I doubt even with Boost backing that the community at large is likely
> > to easily accept integrating another tool into its build processes.
> > The big advantage of the C++ PP is that it's built-in... and that's
> > one of the biggest reasons that the PP _lib_ is better for my purposes
> > than any of the ad hoc code generators I've written/used in the past.
>
> Practicality, and not elegance or suitability, is about the only
> reason that I could agree with.

Practicality in this case is elegance. My users can adjust
code-generation parameters by putting -Dwhatever on their
command-line.

FWIW, I designed a sophisticated purpose-built C++ code generation
language using Python and eventually scrapped it. Ultimately the
programs I'd written were harder to understand than those using the PP
lib. That isn't to say someone else can't do better... I'd like to
see a few ideas if you have any.

> > I think Bjarne's approach is the best way to do that sort of
> > replacement. As long as the PP's functionality is really being
> > replaced by a textual preprocessor (or a token-wise one as we have
> > today) it's going to suffer many of the same problems. Much of those
> > jobs should be filled by a more robust metaprogramming system that's
> > fully integrated into the language and not just a processing phase.
>
> I think here we talk about different things. One path to pursue is
> indeed to provide better means for template programming and another
> is to provide syntactic manipulation. To me, they are different and
> complementary techniques.

Metaprogramming != template programming. In meta-Haskell, they
actually manipulate ASTs in the core language. As I understand the
XTI project, it's going in that sort of direction, though a key link
for metaprogramming is missing.

> > > Maybe namespaces that are so badly designed, you'd think they are
> > > inherited from C?
> >
> > Wow, I'm impressed; that's going to piss off both the hardcore C _and_
> > C++ people!
>
> Heh heh... I knew this is gonna be taken that way :o). What I meant was,
> many shortcomings of C++ root in a need for compatibility with C. With
> namespaces and export, there's no C to blame :o).

I don't know about that. Isn't C's inclusion model a big part of the
reason that namespaces are not more like modules?

> > I've never seen a serious proposal for better namespaces, other than
> > http://boost-consulting.com/writing/qn.html, which seems to have been
> > generally ignored. Have you got any ideas?
>
> That's a good doc solidly motivated; I am sorry it does not get the
> attention that it deserves.

Thanks. Maybe I should re-submit it.

> > > So, a proposal for a new preprocessor would be great.
> >
> > If that's your point, I think it's an interesting one, but somehow I
> > still don't get how it could be appropriate for a workshop on C++
> > libraries.
>
> Ok, I'll drop it.

If you have PP library ideas, by all means bring those up.

> May I still bicker about it on the Usenet? :o)

It's your dime ;-)

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Andrei Alexandrescu (See Website for Email)

Aug 13, 2004, 11:11:31 PM
"Paul Mensonides" <leav...@comcast.net> wrote in message
> (I believe that the link Dave posted was to Chaos--which is distinct from
> Boost Preprocessor.)

I've looked at the existing PP library, not at Chaos.

> What it reminds you of is irrelevant. You know virtually nothing about how it
> works--you've never taken the time. Without that understanding, you cannot
> critique its elegance or lack thereof.

It seems like my comments have annoyed you, and for a good reason. Please
accept my apologies.

FWIW, what I looked at were usage samples, not how it works (either Boost
PP or Chaos). Those *usage* examples I deemed wanting.

> Once again, a quick glance is wholly insufficient. You have not taken the
> time to learn the idioms involved. The solutions that Chaos uses
> *internally* are indeed far more elegant than you realize. Likewise, the
> solutions that Chaos (or Boost PP) engenders through client code are more
> elegant than you realize. You simply don't know enough about it to weigh
> the pros and cons.

Again, I am sorry if I have caused annoyance. I still believe, however, that
you yourself would be happier, and could provide more abstractions, if better
facilities were available to you than what the preprocessor currently offers.
That is what I think would be interesting to discuss.


Andrei

Daniel R. James

Aug 14, 2004, 12:46:05 AM
> "Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> writes:
> > Today Boost uses a "preprocessor library", which in turn (please
> > correct me if my understanding is wrong) relies on a program to
> > generate many big macros up to a fixed "maximum" to overcome the
> > preprocessor's inability to deal with variable numbers of arguments.

Since no one else has pointed this out: it does this to overcome the
preprocessor's lack of recursion; it has nothing to do with variable
arguments.

David Abrahams <da...@boost-consulting.com> wrote in message news:<uzn51r...@boost-consulting.com>...

> In the preprocessor library's _implementation_, there are lots of
> boilerplate program-generated macros, but that's an implementation
> detail that's only needed because so many preprocessors are badly
> nonconforming. In fact, the library's maintainer, Paul Mensonides,
> has a _much_ more elegantly-implemented PP library
> (http://sourceforge.net/projects/chaos-pp/) that has almost no
> boilerplate, but it only works on a few compilers (GCC among them).

Unless I'm missing something, that link goes to an empty sourceforge
project. Which is a pity, because I remember seeing some old chaos
code somewhere, and it looked ace.

Daniel

Paul Mensonides

Aug 14, 2004, 6:33:09 AM
"Andrei Alexandrescu (See Website for Email)"
<SeeWebsit...@moderncppdesign.com> wrote

> > generation via the preprocessor typically yields more type-safe code that
> > is less error-prone and more maintainable than the alternatives.
>
> Would be interesting to see some examples of that around here. I would be
> grateful if you posted some.

Do you mean code examples or just general examples? As far as general
examples go, the inability to manipulate the syntax of the language leads to
either replication (which is error-prone, dramatically increases the number of
maintenance points, and obscures the abstraction represented by the totality of
the replicated code) or the rejection of implementation strategies that would
otherwise be superior. The ability to adapt, to deal with variability, is often
implemented with less type-safe, runtime-based solutions simply because the
metalanguage doesn't allow a simpler way to get from conception to
implementation.

Regarding actual code examples, here's a Chaos-based version of the old
TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses variadics
which are likely to be added with C++0x. (It is also a case-in-point of why
variadics are important.)

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/detection/is_empty.h>
#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/facilities/split.h>
#include <chaos/preprocessor/limits.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define TYPELIST(...) TYPELIST_BYPASS(CHAOS_PP_LIMIT_EXPR, __VA_ARGS__)
#define TYPELIST_BYPASS(s, ...) \
CHAOS_PP_EXPR_S(s)(TYPELIST_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_PREV(s), __VA_ARGS__, \
)) \
/**/
#define TYPELIST_INDIRECT() TYPELIST_I
#define TYPELIST_I(_, s, ...) \
CHAOS_PP_IIF _(CHAOS_PP_IS_EMPTY_NON_FUNCTION(__VA_ARGS__))( \
Loki::NilType, \
Loki::TypeList< \
CHAOS_PP_DECODE _(CHAOS_PP_SPLIT _(0, __VA_ARGS__)), \
CHAOS_PP_EXPR_S _(s)(TYPELIST_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_PREV(s), \
CHAOS_PP_SPLIT _(1, __VA_ARGS__) \
)) \
> \
) \
/**/

The TYPELIST macro takes the place of all of the TYPELIST_x macros (and more) at
one time, has facilities to handle types with open commas (e.g. std::pair<int,
int>), and this is more-or-less doing it by hand in Chaos. If you used
facilities already available, you could do the same with one macro:

#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/lambda/ops.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/for_each.h>

#define TYPELIST(...) \
CHAOS_PP_EXPR( \
CHAOS_PP_TUPLE_FOR_EACH( \
CHAOS_PP_LAMBDA(Loki::TypeList<) \
CHAOS_PP_DECODE_(CHAOS_PP_ARG(1)) CHAOS_PP_COMMA_(), \
(__VA_ARGS__) \
) \
Loki::NilType \
CHAOS_PP_TUPLE_FOR_EACH( \
CHAOS_PP_LAMBDA(>), (__VA_ARGS__) \
) \
) \
/**/

This implementation can process up to ~5000 types and there is no list of 5000
macros anywhere in Chaos. (There are also other, more advanced methods capable
of processing trillions upon trillions of types.)

This example is particularly motivating because it is an example of code used by
clients that is itself client to Chaos. In this case, its primary purpose is to
produce facilities for type manipulation (i.e. Loki, MPL, etc.), which raises the
level of abstraction for clients without sacrificing any type safety whatsoever.

> > Safer? In what way? Name clashes? Multiple evaluation?
> >
> > I have probably written more macros than any other person.
>
> I think you mean "I have probably written more C++ macros than any other
> person." That detail is important.

Yes, it is. I was referring to C and C++ macros.

> I'm not one to claim having written lots
> of macros in any language, and I apologize if the amendment above sounds
> snooty. I just think it's reasonable to claim that the C++ preprocessor
> compares very unfavorably with many other languages' means for syntactic
> abstractions.

Yes.

> I totally agree with everything you wrote, but my point was, I believe,
> misunderstood. Yes, "macros are evil" is an easy cop-out. But I never said
> that. My post says what tantamounts to "The C++ preprocessor sucks". It
> sucks because it is not a powerful-enough tool. That's why.

It is a powerful enough tool, but it could be easier to employ than it is.

> So let me restate my point. Macros are great. I love macros. Syntactic
> abstractions have their place in any serious language, as you very nicely
> point out. And yes, they are distinct from other means of abstraction. And
> yes, they can be very useful, and shouldn't be banned just because they can
> be misused.
>
> (I'll make a parenthesis here that I think is important. I believe the worst
> thing that the C/C++ preprocessor has ever done is to steer an entire huge
> community away from the power of syntactic abstractions.)

That is an *extremely* good point.

> So, to conclude, my point was that the preprocessor is too primitive a tool
> for implementing syntactic abstractions with.

It could be better, by all means, but it is plenty powerful enough to implement
syntactic abstractions--it is more powerful than most people realize. For
example, the first snippet above is using generalized recursion--recursion
itself can be a shareable, extensible, library facility.

> Let's think wishes. You've done a great many good things with the
> preprocessor, so you are definitely the one to be asked. What features do
> you think would have made it easier for you and your library's clients?

The most fundamental thing would be the ability to separate the first arbitrary
preprocessing token (or whitespace separation) from those that follow it in a
sequence of tokens and be able to classify it in some way (i.e. determine what
kind of token it is and what its value is). The second thing would be the
ability to take a single preprocessing token and deconstruct it into characters.
I can do everything else, but can only do those things in limited ways.

Regards,
Paul Mensonides

Paul Mensonides

Aug 14, 2004, 6:33:47 AM
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:u1xici...@boost-consulting.com...

> > First, I've looked in my boost implementation to see things like
> > BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
> > BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
> > in my post) is that such macros are generated by a program.
>
> Yes, but as I mentioned none of that is required in std C++.
> http://sourceforge.net/projects/chaos-pp/ doesn't use any
> program-generated macros.

It does use some, but not for algorithmic constructs. E.g. the closest
equivalent (i.e. as feature-lacking as possible) to BOOST_PP_REPEAT under Chaos
is:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/

Regards,
Paul Mensonides

Paul Mensonides

Aug 14, 2004, 6:35:35 AM
"Andrei Alexandrescu (See Website for Email)"
<SeeWebsit...@moderncppdesign.com> wrote in message
news:2o4citF...@uni-berlin.de...

> "Paul Mensonides" <leav...@comcast.net> wrote in message
> > (I believe that the link Dave posted was to Chaos--which is distinct from
> > Boost Preprocessor.)
>
> I've looked at the existing PP library, not at Chaos.

In that case, I agree. Internally, Boost PP is a mess--but a mess caused by
lackluster conformance.

> > What it reminds you of is irrelevant. You know virtually nothing about
> > how it works--you've never taken the time. Without that understanding,
> > you cannot critique its elegance or lack thereof.
>
> It seems like my comments have annoyed you, and for a good reason. Please
> accept my apologies.

I don't mind the comments. I do mind preconceptions. With the preprocessor
there are a great many preconceptions about what it can and cannot do.

Regards,
Paul Mensonides

David Abrahams

Aug 14, 2004, 9:38:38 AM
"Paul Mensonides" <leav...@comcast.net> writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:u1xici...@boost-consulting.com...
>

Confused. I don't see anything here that looks like a
program-generated macro.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com


Paul Mensonides

Aug 14, 2004, 2:32:28 PM

That was the point. REPEAT is an algorithmic construct that uses recursion, but
it doesn't require macro repetition. However, some lower-level abstractions,
like recursion itself (e.g. EXPR_S) and saturation arithmetic (e.g. DEC),
require macro repetition. For something like recursion, which is not naturally
present in macro expansion, some form of macro repetition will always be
necessary. The difference is that that repetition is hidden behind an
abstraction and the relationship of N macros need not imply N steps. As
Andrei mentioned, BOOST_PP_REPEAT requires (at least) N macros to repeat N
things. Similarly, BOOST_PP_FOR, BOOST_PP_WHILE, etc., all require (at least) N
macros to perform N steps. That is not the case with Chaos.

#include <chaos/preprocessor/control/inline_when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define FOR(pred, op, macro, state) \
FOR_S(CHAOS_PP_STATE(), pred, op, macro, state) \
/**/
#define FOR_S(s, pred, op, macro, state) \
FOR_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
pred, op, macro, state\
) \
/**/
#define FOR_INDIRECT() FOR_I
#define FOR_I(_, s, pred, op, macro, state) \
CHAOS_PP_INLINE_WHEN _(pred _(s, state))( \
macro _(s, state) \
CHAOS_PP_EXPR_S _(s)(FOR_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
pred, op, macro, op _(s, state) \
)) \
) \
/**/

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define WHILE(pred, op, state) \
WHILE_S(CHAOS_PP_STATE(), pred, op, state) \
/**/
#define WHILE_S(s, pred, op, state) \
WHILE_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
pred, op, state \
) \
/**/
#define WHILE_INDIRECT() WHILE_I
#define WHILE_I(_, s, pred, op, state) \
CHAOS_PP_IIF _(pred _(s, state))( \
CHAOS_PP_EXPR_S _(s)(WHILE_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
pred, op, op _(s, state) \
)), \
state \
) \
/**/

Regards,
Paul Mensonides


Paul Mensonides

Aug 15, 2004, 6:54:34 AM
"Daniel R. James" <dan...@calamity.org.uk> wrote in message

> > (http://sourceforge.net/projects/chaos-pp/) that has almost no
> > boilerplate, but it only works on a few compilers (GCC among them).
>
> Unless I'm missing something, that link goes to an empty sourceforge
> project. Which is a pity, because I remember seeing some old chaos
> code somewhere, and it looked ace.

The project is definitely not empty, it just hasn't made any "official"
releases.

Regards,
Paul Mensonides

Andrei Alexandrescu (See Website for Email)

Aug 15, 2004, 6:56:33 AM
"Paul Mensonides" <leav...@comcast.net> wrote in message
news:IuydnfKYUPw...@comcast.com...

> Regarding actual code examples, here's a Chaos-based version of the old
> TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses
> variadics
> which are likely to be added with C++0x. (It is also a case-in-point of
> why
> variadics are important.)

Cool. Before continuing the discussion, I have a simple question - how does
your implementation cope with commas in template types, for example:

TYPELIST(vector<int, my_allocator<int> >, vector<float>)

would correctly create a typelist of two elements? If not, what steps do I
need to take to create such a typelist (aside from a typedef)?


Andrei

galathaea

Aug 15, 2004, 7:02:50 AM
"Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> wrote in message news:<2nqfupF...@uni-berlin.de>...

> "Jeremy Siek" <jerem...@gmail.com> wrote in message
> news:21925601.04080...@posting.google.com...
> > CALL FOR PAPERS/PARTICIPATION
> >
> > C++, Boost, and the Future of C++ Libraries
> > Workshop at OOPSLA
> > October 24-28, 2004
> > Vancouver, British Columbia, Canada
> > http://tinyurl.com/4n5pf
> [snip]
>
> I wonder if the submitters follow this post's trail or I should email
> them... anyway, here goes.
>
> I am not sure if I'll ever get around to writing it, so I said I'd post the
> idea here, maybe someone will pursue it. In short, I think a proposal for a
> replacement of C++'s preprocessor would be, I think, welcome.
>

There is only one replacement for the c++ preprocessor which I would
consider truly up to c++'s potential as a competitive language into
the near future: metacode. Full metacode capabilities, not just a
minor update to template capabilities.

By this I mean the capability to walk the parse tree at compile time
and perform transformations in a meta-type safe manner (analogous to
full second-order lambda capability such as System F). Vandevoorde's
metacode seems a good step in this direction, but I really think that
such a proposal must be as complete as possible (and not a library
proposal but a full language extension).

Consider some of the things I've seen talked about recently on the
newsgroups. Injecting a function call after all constructors have
executed is certainly one of those things the language should allow,
but currently we must work 'around' the language by forcing the use of
factories in such cases or leaving the two-stage use to clients (never
a good idea). In terms of functional relationships, though (even in
the presence of exceptions), such a task is a simple injection in
terms of functional orderings of the ctors and we are currently made
to fight with the language definitions enforced by the compiler.

Or consider the place where I currently use the Boost preprocessing
library the most: serialisation of classes. If we were allowed to
walk the member list for a class, walk its inheritance graph, and
stringise the class names (or produce better unique identifiers),
serialisation would be a cakewalk. Unfortunately, it is made much
more difficult and requires a more difficult object definition model
if any of the tasks of serialisation are to be automated by a library.

And of course there is all that control over the exception path
process, pattern generation, and general aspect functional
relationship injection that programmers have been crying about for
years.

> Today Boost uses a "preprocessor library", which in turn (please correct me
> if my understanding is wrong) relies on a program to generate many big
> macros up to a fixed "maximum" to overcome the preprocessor's inability to
> deal with variable numbers of arguments.

Others have pointed out that this is much more a nonconformance issue
than it is an inherent preprocessor limitation, but I'd like to stress
that a fully recursive code generation system in c++ would not present
such problems.

> Also, please correct me if I'm wrong (because I haven't really looked deep
> into it), but my understanding is that people around Boost see the PP
> library as a necessary but unpleasantly-smelling beast that makes things
> around it smelly as well. [Reminds me of the Romanian story: there was a guy
> called Pepelea (pronounced Peh-Peh-leah) who was poor but had inherited a
> beautiful house. A rich man wanted to buy it, and Pepelea sold it on one
> condition: that Pepelea owns a nail in the living room's wall, in which he
> can hang whatever he wanted. Now when the rich man was having guests and
> whatnot, Pepelea would drop by and embarrassingly hang a dirty old coat. Of
> course in the end the rich man got so exasperated that he gave Pepelea the
> house back for free. Ever since that story, "Pepelea's nail" is referred to
> as something like... like what the preprocessor is to the C++ language.]

You are certainly a storyteller, but I'd give the c++ preprocessor
more credit. It has been the only method of working around certain
limitations (shortfalls in complete second-order lambda
expressiveness) when needed by the coder. Indeed, if you take one of
the coder's duties as minimising the updates needed by future feature
revisions of other coders, the preprocessor has always had its place
secure from typed language features. Again, serialisation has been my
major use, but other reasons for, for instance, type to string
conversion include interception of API's through lookup in the
import/export lists of the modules. A full metacode capability would
make that obsolete (walking the symbol table should be as easy as
walking the parse tree itself).

> That would be reason one to create a new C++ preprocessor. (And when I say
> "new," that's not like in "yet another standard C++ preprocessor". I have
> been happy to see my suggestion on the Boost mailing list followed in that
> the WAVE preprocessor was built using Boost's own parser generator library,
> Spirit.) What I am talking now is "a backwards-INcompatible C++ preprocessor
> aimed at displacing the existing preprocessor forever and replacing it with
> a better one".

Certainly full metacoding capabilities would make this obsolete...

> If backed by the large Boost community, the new preprocessor could easily
> gain popularity and be used in new projects instead of the old one. To avoid
> inheriting past's mistakes, the new preprocessor doesn't need to be
> syntax-compatible in any way with the old preprocessor, but only
> functionally compatible, in that it can do all that can be done with the
> existing preprocessor, only that it has new means to do things safer and
> better.
>
> I think that would be great. Because if we all stop coding for a second and
> think of it, what's the ugliest scar on C++'s face - what is Pepelea's nail?
> Maybe "export" which is so broken and so useless and so abusive that its
> implementers have developed Stockholm syndrome during the long years that
> took them to implement it? Maybe namespaces that are so badly designed,
> you'd think they are inherited from C? I'd say they are good contenders
> against each other, but none of them holds a candle to the preprocessor.

If we extend the idea of metacode to all of the translation process,
in other words if the programmer were to have control points inserted
into all parts of the code generation process, then export would never
have been a problem to begin with. Unfortunately, the c++
standardisation community feels that processes like linking are
sacrilege and not to be touched by regulation. If a full parse tree
walk were to include the ability to load other translation units and
manipulate their trees, then we wouldn't find export a 'scar' or in
any way difficile.

You know, if the c++ committee had the cojones to make standardisation
over the full translation process, we might even see dynamic linking a
possibility for the next language revision.

> So, a proposal for a new preprocessor would be great. Here's a short wish
> list:
>
> * Does what the existing one does (although some of those coding patterns
> will be unrecommended);
>
> * Supports one-time file inclusion and multiple file inclusion, without the
> need for guards (yes, there are subtle issues related to that... let's at
> least handle a well-defined subset of the cases);
>
> * Allows defining "hygienic" macros - macros that expand to the same text
> independent on the context in which they are expanded;
>
> * Allows defining scoped macros - macros visible only within the current
> scope;
>
> * Has recursion and possibly iteration;
>
> * Has a simple, clear expansion model (negative examples abound - NOT like
> m4, NOT like tex... :o))
>
> * Supports variable number of arguments. I won't venture into thinking of
> more cool support a la Scheme or Dylan or Java Extender macros.

I'm not a big fan of textual macros when typed completion gives the
same computational capability. I really think that metacoding,
architecture generation, and all of those great things we come to look
for in AOP and generative intentional programming is what the c++
standards committee should focus on. With that type of capability, a
"pre"-processor is superfluous.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

galathaea: prankster, fablist, magician, liar

Paul Mensonides

Aug 16, 2004, 6:42:03 AM
"Andrei Alexandrescu (See Website for Email)"
<SeeWebsit...@moderncppdesign.com> wrote in message
news:2o7khqF...@uni-berlin.de...

> "Paul Mensonides" <leav...@comcast.net> wrote in message
> news:IuydnfKYUPw...@comcast.com...
> > Regarding actual code examples, here's a Chaos-based version of the old
> > TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses variadics
> > which are likely to be added with C++0x. (It is also a case-in-point of why
> > variadics are important.)
>
> Cool. Before continuing the discussion, I have a simple question - how does
> your implementation cope with commas in template types, for example:
>
> TYPELIST(vector<int, my_allocator<int> >, vector<float>)
>
> would correctly create a typelist of two elements? If not, what steps do I
> need to take to creat such a typelists (aside from a typedef)?

You'd just have to parenthesize types that contain open commas:

TYPELIST((vector<int, my_allocator<int> >), vector<float>)

The DECODE macro in the example removes parentheses if they exist. E.g.

CHAOS_PP_DECODE(int) // int
CHAOS_PP_DECODE((int)) // int

Using parentheses is, of course, only necessary for types that contain open
commas. (There is also an ENCODE macro that completes the symmetry, but it is
unnecessary.)

There are also several other alternatives. It is possible to pass a type
through a system of macros without the system of macros being intrusively
modified (with DECODE or similar). This, in Chaos terminology, is a "rail". It
is a macro invocation that effectively won't expand until some context is
introduced. E.g.

#include <chaos/preprocessor/punctuation/comma.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

A(CHAOS_PP_COMMA())

This will error with too many arguments to B. However, the following disables
evaluation of COMMA() until after the system "returns" from A:

#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/rail.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

CHAOS_PP_WALL(A(
CHAOS_PP_UNSAFE_RAIL(CHAOS_PP_COMMA)()
))

(There is also CHAOS_PP_RAIL that is similar, but getting into the difference
here is too complex a subject.)

In any case, the expansion of COMMA is inhibited until it reaches the context
established by WALL. The same thing can be achieved for types non-intrusively.
Chaos has two rail macros designed for this purpose, TYPE and TYPE_II. The
first, TYPE, is the most syntactically clean, but is only available with
variadics:

#include <chaos/preprocessor/facilities/type.h>
#include <chaos/preprocessor/recursion/rail.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

CHAOS_PP_WALL(A(
CHAOS_PP_TYPE(std::pair<int, int>)
))
// std::pair<int, int>

The second, TYPE_II, is more syntactically verbose, but it works even without
variadics, and without counting commas:

#include <chaos/preprocessor/facilities/type.h>
#include <chaos/preprocessor/recursion/rail.h>

#define A(x) B(x)
#define B(x) C(x)
#define C(x) D(x)
#define D(x) x

CHAOS_PP_WALL(A(
CHAOS_PP_TYPE_II(CHAOS_PP_BEGIN std::pair<int, int> CHAOS_PP_END)
))
// std::pair<int, int>

Thus, you *could* make a typelist using rails such as this to protect
open-comma'ed types, but for typelists (which inherently deal with types), it
would be pointless. Rails are more useful when some arbitrary data that you
need to pass around happens to be a type, but doesn't necessarily have to be.

Regards,
Paul Mensonides

Andrei Alexandrescu (See Website for Email)

Aug 16, 2004, 4:04:54 PM
"Paul Mensonides" <leav...@comcast.net> wrote in message
news:IuydnfKYUPw...@comcast.com...
(from another post)

> The DECODE macro in the example removes parentheses if they exist. E.g.
>
> CHAOS_PP_DECODE(int) // int
> CHAOS_PP_DECODE((int)) // int

That's what I was hoping for; thanks.

(back to this other post)

I am sure I will develop a lot more appreciation for this solution once I
fully understand all of the clever techniques and idioms used.

For now, I hope you will agree with me that the above fosters learning yet
another programming style, one different from straight programming,
template programming, or MPL-based programming.

I would also like to compare the solution above with the "imaginary" one
that I have in mind as a reference. It uses LISP-macros-like artifacts and a
few syntactic accoutrements.

$define TYPELIST() { Loki::NullType }
$define TYPELIST(head $rest more) {
Loki::Typelist< head, TYPELIST(more) >
}

About all that needs to be explained is that "$rest name" binds name to
whatever other comma-separated arguments follow, if any, and that the
top-level { and } are removed when creating the macro.

If you would argue that your version above is as elegant as or more elegant
than this one, we have irreducible opinions. I consider your version drowning in
details that have nothing to do with the task at hand, but with handling the
ways in which the preprocessor is inadequate for the task at hand. Same
opinion goes for the other version below:

> #include <chaos/preprocessor/facilities/encode.h>
> #include <chaos/preprocessor/lambda/ops.h>
> #include <chaos/preprocessor/punctuation/comma.h>
> #include <chaos/preprocessor/recursion/expr.h>
> #include <chaos/preprocessor/tuple/for_each.h>
>
> #define TYPELIST(...) \
> CHAOS_PP_EXPR( \
> CHAOS_PP_TUPLE_FOR_EACH( \
> CHAOS_PP_LAMBDA(Loki::TypeList<) \
> CHAOS_PP_DECODE_(CHAOS_PP_ARG(1)) CHAOS_PP_COMMA_(), \
> (__VA_ARGS__) \
> ) \
> Loki::NilType \
> CHAOS_PP_TUPLE_FOR_EACH( \
> CHAOS_PP_LAMBDA(>), (__VA_ARGS__) \
> ) \
> ) \
> /**/

> This implementation can process up to ~5000 types and there is no list of
> 5000
> macros anywhere in Chaos. (There are also other, more advanced methods
> capable
> of processing trillions upon trillions of types.)

I guess you have something that increases with the logarithm of that number,
is that correct?

> > Let's think wishes. You've done a great many good things with the
> > preprocessor, so you are definitely the one to be asked. What features
> > do
> > you think would have made it easier for you and your library's clients?
>
> The most fundamental thing would be the ability to separate the first
> arbitrary
> preprocessing token (or whitespace separation) from those that follow it
> in a
> sequence of tokens and be able to classify it in some way (i.e. determine
> what
> kind of token it is and what its value is). The second thing would be the
> ability to take a single preprocessing token and deconstruct it into
> characters.
> I can do everything else, but can only do those things in limited ways.

I understand the first desideratum, but not the second. What would be the
second thing beneficial for?


Andrei

Paul Mensonides

Aug 17, 2004, 5:47:20 AM
Andrei Alexandrescu (See Website for Email) wrote:

> I am sure I will develop a lot more appreciation for this solution
> once I will fully understand all of the clever techniques and idioms
> used.

Yes.

> For now, I hope you will agree with me that the above fosters
> learning yet another programming style, which is different than
> straight programming, template programming, or MPL-based programming.

Definitely. It is a wholly different language.

> I would also like to compare the solution above with the "imaginary"
> one that I have in mind as a reference. It uses LISP-macros-like
> artifacts and a few syntactic accoutrements.
>
> $define TYPELIST() { Loki::NullType }
> $define TYPELIST(head $rest more) {
> Loki::Typelist< head, TYPELIST(more) >
> }
>
> About all that needs to be explained is that "$rest name" binds name
> to whatever other comma-separated arguments follow, if any, and that
> the top-level { and } are removed when creating the macro.

So, for the above you need (basically) two things: overloading on number of
arguments and recursion. Both of those things are already indirectly possible
(with the qualification that overloading on number of arguments is only possible
with variadics). That isn't to say that those facilities wouldn't be useful
features of the preprocessor, because they would. I'm merely referring to those
things which can be done versus those things which cannot with the preprocessor
as it currently exists. I'm concerned more with functionality than I am with
syntactic cleanliness.

> If you would argue that your version above is more or as elegant as
> this one, we have irreducible opinions.

The direct "imaginary" version is obviously more elegant.

> I consider your version
> drowning in details that have nothing to do with the task at hand,
> but with handling the ways in which the preprocessor is inadequate
> for the task at hand. Same opinion goes for the other version below:

But the preprocessor *is* adequate for the task. It just isn't as syntactically
clean as you'd like it to be.

>> This implementation can process up to ~5000 types and there is no
>> list of 5000
>> macros anywhere in Chaos. (There are also other, more advanced
>> methods capable
>> of processing trillions upon trillions of types.)
>
> I guess you have something that increases with the logarithm if that
> number, is that correct?

Exponential structure, yes. A super-reduction of the idea is this:

#define A(x) B(B(x))
#define B(x) C(C(x))
#define C(x) x

Here, the x argument gets scanned for expansion with a base-2 exponential. With
about 25 macros you're already into millions of scans. Each of those scans can
be an arbitrary computational step.

>> The most fundamental thing would be the ability to separate the first
>> arbitrary
>> preprocessing token (or whitespace separation) from those that
>> follow it in a
>> sequence of tokens and be able to classify it in some way (i.e.
>> determine what
>> kind of token it is and what its value is). The second thing would
>> be the ability to take a single preprocessing token and deconstruct
>> it into characters.
>> I can do everything else, but can only do those things in limited
>> ways.
>
> I understand the first desideratum, but not the second. What would be
> the second thing beneficial for?

Identifier and number processing primarily, but also string and character
literals. Given those two things alone you could write a C++ interpreter with
the preprocessor--or, much more simply, you could trivially write your imaginary
example above. Speaking of which, you can already write interpreters that get
close. The one thing that you cannot do is take arbitrary preprocessing
tokens as input; they would have to be quoted in some way.

Regards,
Paul Mensonides

Arkadiy Vertleyb

Aug 17, 2004, 5:48:07 AM
"Paul Mensonides" <leav...@comcast.net> wrote in message news:<CNCdnas4rM_...@comcast.com>...

> "Andrei Alexandrescu (See Website for Email)"
> <SeeWebsit...@moderncppdesign.com> wrote in message
> news:2o4citF...@uni-berlin.de...
> > "Paul Mensonides" <leav...@comcast.net> wrote in message
> > > (I believe that the link Dave posted was to Chaos--which is distinct from
> > > Boost
> > > Preprocessor.)
> >
> > I've looked at the existing PP library, not at Chaos.
>
> In that case, I agree. Internally, Boost PP is a mess--but a mess caused by
> lackluster conformance.

I think the actual value of a library can be measured as the
difference between the "mess" it takes as input and the "mess" (if
any remains) its user gets as output. In this regard, IMO,
it's difficult to overestimate the value of the Boost PP library, no
matter how messy its implementation might be.

(Just a thought from one of the recently converted former PP-haters)

Regards,
Arkadiy

Daveed Vandevoorde

Aug 17, 2004, 5:54:50 AM
Andrei Alexandrescu wrote:
[...]

> Maybe "export" which is so broken and so useless and so abusive that its
> implementers have developed Stockholm syndrome during the long years that
> took them to implement it?

How is "export" useless and broken?

Have you used it for any project? I find it very pleasant
to work with in practice.

Daveed

Andrei Alexandrescu (See Website for Email)

Aug 17, 2004, 3:57:22 PM
"Paul Mensonides" <leav...@comcast.net> wrote in message
news:ct6dnYwsFuE...@comcast.com...

> > For now, I hope you will agree with me that the above fosters
> > learning yet another programming style, which is different than
> > straight programming, template programming, or MPL-based programming.
>
> Definitely. It is a wholly different language.

Now it only remains for me to convince you that that's a disadvantage :o).

> So, for the above you need (basically) two things: overloading on number
> of
> arguments and recursion. Both of those things are already indirectly
> possible
> (with the qualification that overloading on number of arguments is only
> possible
> with variadics). That isn't to say that those facilities wouldn't be
> useful
> features of the preprocessor, because they would. I'm merely referring to
> those
> things which can be done versus those things which cannot with the
> preprocessor
> as it currently exists. I'm concerned more with functionality than I am
> with
> syntactic cleanliness.

I disagree that it's only syntactic cleanliness. Lack of syntactic cleanliness
is the CHAOS_PP_ prefix that you need to prepend to most of your library's
symbols. But let me pull the code again:

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/

As far as I understand, REPEAT, REPEAT_S, REPEAT_INDIRECT, REPEAT_I, and the
out-of-sight CHAOS_PP_STATE, CHAOS_PP_OBSTRUCT, CHAOS_PP_EXPR_S are dealing
with the preprocessor alone and have zero relevance to the task. The others
implement an idiom for looping that I'm sure one can learn, but is far from
familiar to a C++ programmer. To say that that's just a syntactic
cleanliness thing is a bit of a stretch IMHO. By the same argument, any
Turing complete language will do at the cost of "some" syntactic
cleanliness.

> > I consider your version
> > drowning in details that have nothing to do with the task at hand,
> > but with handling the ways in which the preprocessor is inadequate
> > for the task at hand. Same opinion goes for the other version below:
>
> But the preprocessor *is* adequate for the task. It just isn't as
> syntactically
> clean as you'd like it to be.

I maintain my opinion that we're talking about more than syntactic
cleanliness here. I didn't say the preprocessor is "incapable" for the task.
But I do believe (and your code strengthened my belief) that it is
"inadequate". Now I looked on www.m-w.com and I saw that inadequate means "
: not adequate : INSUFFICIENT; also : not capable " and that adequate means
"sufficient for a specific requirement" and "lawfully and reasonably
sufficient". I guess I meant it as a negation of the last meaning, and even
that is a bit too strong. Obviously the preprocessor is "capable", because
hey, there's the code, but it's not, let me rephrase - very "fit" for the
task.

> > I guess you have something that increases with the logarithm if that
> > number, is that correct?
>
> Exponential structure, yes. A super-reduction of the idea is this:
>
> #define A(x) B(B(x))
> #define B(x) C(C(x))
> #define C(x) x
>
> Here, the x argument gets scanned for expansion with a base-2 exponential.
> With
> about 25 macros you're already into millions of scans. Each of those
> scans can
> be an arbitrary computational step.

Wouldn't it be nicer if you just had one mechanism (true recursion or
iteration) that does it all in one shot?


Andrei

Walter

Aug 17, 2004, 6:03:15 PM

"Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
news:52f2f9cd.04081...@posting.google.com...

> "Andrei Alexandrescu wrote:
> [...]
> > Maybe "export" which is so broken and so useless and so abusive that
its
> > implementers have developed Stockholm syndrome during the long years
that
> > took them to implement it?
>
> How is "export" useless and broken?
>
> Have you used it for any project? I find it very pleasant
> to work with in practice.

From my readings on export, the benefits are supposed to be:

1) avoid pollution of the name space with the names involved in the
implementation details of the template, sometimes called "code hygiene"

2) template source code hiding

3) faster compilation

Examining each of these in turn:

1) Isn't this what C++ namespaces are for?

2) Given the ease with which Java .class files are "decompiled" back into
source code, and the fact that precompiled templates will necessarily
contain even more semantic info than .class files, it is hard to see how
exported templates offer secure hiding of template implementations. It is
not analogous to the problem of "turning hamburger back into a cow" that
object file decompilers have. While some "security through obscurity" may be
achieved by not documenting the file format, if the particular compiler
implementation is popular, there surely will appear some tool to do it.

3) The faster compilation is theoretically based on the idea that the
template implementation doesn't need to be rescanned and reparsed every time
it is #include'd. However, many modern C++ compilers already support
"precompiled headers", which already provide just that capability without
export at all. I'd be happy to accept a compilation speed benchmark
challenge of Digital Mars DMC++ with precompiled headers vs an export
implementation.

I look at export as a cost/benefit issue. What are the benefits, and what
are the costs? The benefits, as discussed above, are not demonstrated to be
significant. The cost, however, is enormous - 2 to 3 man years of
implementation effort, which means that other, more desirable, features
would necessarily get deferred/delayed.

What export does is attempt to graft some import model semantics onto the
inclusion model semantics. The two are fundamentally at odds, hence all the
complicated rules and implementation effort. The D Programming Language
simply abandons the inclusion model semantics completely, and goes instead
with true imported modules. This means that exported templates in D are
there "for free", i.e. they involve no extra implementation effort and no
strange rules. And after using them for a while, yes it is very pleasant to
be able to do:

----- foo.d ----
template Foo(T) { T x; }
----- bar.d ----
import foo;

foo.Foo!(int); // instantiate template Foo with 'int' type
----------------

-Walter
www.digitalmars.com free C/C++/D compilers

Hyman Rosen

Aug 18, 2004, 8:08:33 AM
Walter wrote:
>From my readings on export, the benefits are supposed to be:

The benefits of export are the same as the benefits of
separate compilation of non-templated code. That is, for
normal code, I can write

In joe.h:
struct joe { void frob(); double gargle(double); };
In joe.c:
namespace { void fiddle() { } double grok() { return 3.7; } }
void joe::frob() { fiddle(); }
double joe::gargle(double d) { return d + grok(); }

And users of the joe class only include joe.h, and never
have to worry about joe.c. Without export, if joe were a
template class, then every compilation unit which uses a
method of joe would have to include the implementation of
those methods bodily. This constrains the implementation;
for example, that anonymous namespace wouldn't work. With
export, users include the header file and they are done.

tom_usenet

Aug 18, 2004, 2:27:33 PM
On 17 Aug 2004 18:03:15 -0400, "Walter"
<wal...@digitalmars.nospamm.com> wrote:

>
>"Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
>news:52f2f9cd.04081...@posting.google.com...
>> "Andrei Alexandrescu wrote:
>> [...]
>> > Maybe "export" which is so broken and so useless and so abusive that
>its
>> > implementers have developed Stockholm syndrome during the long years
>that
>> > took them to implement it?
>>
>> How is "export" useless and broken?
>>
>> Have you used it for any project? I find it very pleasant
>> to work with in practice.
>
>From my readings on export, the benefits are supposed to be:
>
>1) avoid pollution of the name space with the names involved in the
>implementation details of the template, sometimes called "code hygene"
>
>2) template source code hiding
>
>3) faster compilation

4) avoid pollution of the lookup context in which the template
definition exists with names from the instantiation context, except
where these are required (e.g. dependent names).

5) reduce dependencies

>Examining each of these in turn:

Hmm, this is all rehashing I think.

>1) Isn't this what C++ namespaces are for?

That ignores macros and argument dependent lookup, which transcend
namespaces (or rather operate in a slightly unpredictable set of
namespaces in the case of the latter).

>2) Given the ease with which Java .class files are "decompiled" back into
>source code, and the fact that precompiled templates will necessarilly
>contain even more semantic info than .class files, it is hard to see how
>exported templates offer secure hiding of template implementations. It is
>not analagous to the problem of "turning hamburger back into a cow" that
>object file decompilers have. While some "security through obscurity" may be
>achieved by not documenting the file format, if the particular compiler
>implementation is popular, there surely will appear some tool to do it.

Precompiled templates don't need more semantic information than class
files. In particular, all code involving non-dependent names can be
fully compiled, or at the very least, the names can be removed from
the precompiled template file. In other words, the file format might
consist of a combination of ordinary object code intermingled with
other more detailed stuff.

I believe that EDG may be working on something related to this, but
they're keeping fairly schtum about it so I don't know the details.

>3) The faster compilation is theoretically based on the idea that the
>template implementation doesn't need to be rescanned and reparsed every time
>it is #include'd. However, many modern C++ compilers already support
>"precompiled headers", which already provide just that capability without
>export at all. I'd be happy to accept a compilation speed benchmark
>challenge of Digital Mars DMC++ with precompiled headers vs an export
>implementation.

The compilation speed advantages also come from dependency reductions.
If template definitions are modified, only the template
specializations need to be recompiled. If the instantiation context
(which I believe only consists of all extern names) is saved in each
case, then the instantiation can be recompiled without having to
recompile the whole TU containing the implicit template instantiation.

>I look at export as a cost/benefit issue. What are the benefits, and what
>are the costs? The benefits, as discussed above, are not demonstrated to be
>significant. The cost, however, is enormous - 2 to 3 man years of
>implementation effort, which means that other, more desirable, features
>would necessarilly get deferred/delayed.
>
>What export does is attempt to graft some import model semantics onto the
>inclusion model semantics. The two are fundamentally at odds, hence all the
>complicated rules and implementation effort. The D Programming Language
>simply abandons the inclusion model semantics completely, and goes instead
>with true imported modules. This means that exported templates in D are
>there "for free", i.e. they involve no extra implementation effort and no
>strange rules. And after using them for a while, yes it is very pleasant to
>be able to do:
>
>----- foo.d ----
>template Foo(T) { T x; }
>----- bar.d ----
>import foo;
>
>foo.Foo!(int); // instantiate template Foo with 'int' type
>----------------

It seems to me that there were three alternatives in C++.

1. Don't support any kind of model except the inclusion one. If this
is done, two-phase name lookup should have been dropped as well, since
it is confusing at best, and only really necessary if export is to be
supported. It catches some errors earlier, but at some expense to
programmers. The "typename" and "template" disambiguators could also
blissfully be dropped.

2. Add module support to C++. This obviously is as large a proposal as
export, but clearly #include is unhelpful in a language as complex as
C++; really you just want to import extern names from particular
namespaces, not textually include a whole file.

3. Support separate compilation of templates in some form, without
modules. If you do this, I think you pretty much end up with two phase
name lookup and export (indicating that it isn't broken).

The committee rejected 1, I doubt anyone suggested 2, so 3 was the
remaining choice.

Personally, I might well have gone with 1; templates are complicated
enough, and two-phase name lookup and export have unnecessarily made
them much more complex. On the other hand, 1 doesn't provide the
benefits that export does provide.

So, ignoring implementation difficulty, I think export does just win
as a useful feature. With the implementation difficulty, it's not so
clear.

Tom

Andrei Alexandrescu (See Website for Email)

Aug 18, 2004, 2:42:26 PM
"Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
news:52f2f9cd.04081...@posting.google.com...
> Andrei Alexandrescu wrote:
> [...]
> > Maybe "export" which is so broken and so useless and so abusive that its
> > implementers have developed Stockholm syndrome during the long years
> > that
> > took them to implement it?
>
> How is "export" useless and broken?
>
> Have you used it for any project? I find it very pleasant
> to work with in practice.

Haven't used export, and not because I didn't wanna.

[Whom do you think I referred to when mentioning the Stockholm syndrome?
:o)]

I'd say, a good feature, like a good business idea, can be explained in a
few words. What is the few-words good explanation of export? (I *am*
interested.)

In addition, a good programming language feature does what it was intended
to do (plus some other neat things :o)), and can be reasonably implemented.

I think we all agree "export" failed the last test.

Does it do what it was intended to do? (Again, I *am* interested.)

A summary of what's the deal with export would be of great help to at least
myself, so I'd be indebted to anyone who'd give me one. For full disclosure,
my current perception is:

1. It's hard to give a good account of what export does in a few words, at
least an account that's impressive.

2. export failed horribly at doing what was initially supposed to do. I
believe what it was supposed to do was true (not "when"s and "cough"s and
"um"s) separate compilation of templates. Admittedly, gaffes in other areas
of language design are at fault for that failure. Correct me if I'm wrong.

3. export failed miserably at being reasonably easy to implement.

Combined with 1 and 2, I can only say: at the very best, export is a Pyrrhic
victory.


Andrei

Thorsten Ottosen

Aug 18, 2004, 2:44:45 PM
"Hyman Rosen" <hyr...@mail.com> wrote in message news:10927814...@master.nyc.kbcfp.com...

| Walter wrote:
| >From my readings on export, the benefits are supposed to be:
|
| The benefits of export are the same as the benefits of
| separate compilation of non-templated code.

| With


| export, users include the header file and they are done.

ok, but with separate compilation we get faster compilation. How much faster will/can the export version be?

Currently it is also seriously tedious to implement class template member functions outside the class. I hope experience with
export can help promote class namespaces as described by Carl Daniel (see
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1420.pdf ).

br

Thorsten

Walter

Aug 18, 2004, 2:47:18 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:10927814...@master.nyc.kbcfp.com...
> The benefits of export are the same as the benefits of
> separate compilation of non-templated code. That is, for
> normal code, I can write
>
> In joe.h:
> struct joe { void frob(); double gargle(double); };
> In joe.c:
> namespace { void fiddle() { } double grok() { return 3.7; } }
> void joe::frob() { fiddle(); }
> double joe::gargle(double d) { return d + grok(); }
>
> And users of the joe class only include joe.h, and never
> have to worry about joe.c.

They would for templates, since the compiler will need to precompile it at
some point. You'd still have to put a dependency on joe.c in the makefile,
etc., since there's now an order to compiling the source files. Furthermore,
there's no indication to the compiler that the template implementation is in
joe.c, so some sort of cross reference would need building or it'd need to
be manually specified. That's all doable, of course, and is not that big an
issue, but I wished to point out that it isn't quite as simple as object
files are. A similar procedure is necessary for precompiled headers.

> Without export, if joe were a
> template class, then every compilation unit which uses a
> method of joe would have to include the implementation of
> those methods bodily.

Yes, that's right. But just what are the benefits of separate compilation?
They are the 3 I mentioned. There is no semantic benefit that namespaces
can't address. (The old problem of needing to separate things because of
insufficient memory for compilation has faded away.)

> This constrains the implementation;
> for example, that anonymous namespace wouldn't work.

I don't understand why namespaces wouldn't do the job. Isn't that kind of
problem exactly what namespaces were designed to solve?

>With export, users include the header file and they are done.

So, we have, for the user:
export template foo ...
v.s.
#include "foo_implementation.h"

and they're done in either case. Sure, the former is slightly prettier, but
if we're going to overhaul C++ for aesthetic appeal, I'd do a lot of things
that are a LOT easier to implement before that one <g>.

Walter

Aug 19, 2004, 8:02:44 AM

"tom_usenet" <tom_u...@hotmail.com> wrote in message
news:9kg6i01j7jnhjrui1...@4ax.com...

> On 17 Aug 2004 18:03:15 -0400, "Walter"
> <wal...@digitalmars.nospamm.com> wrote:
> >>From my readings on export, the benefits are supposed to be:
> >
> >1) avoid pollution of the name space with the names involved in the
> >implementation details of the template, sometimes called "code hygene"
> >
> >2) template source code hiding
> >
> >3) faster compilation
>
> 4) avoid pollution of the lookup context in which the template
> definition exists with names from the instantiation context, except
> where these are required (e.g. dependent names).

Shouldn't namespaces cover this? That's what they're for.

> 5) reduce dependencies

I think that is another facet of 1 and 4.

> >Examining each of these in turn:
>
> Hmm, this is all rehashing I think.
>
> >1) Isn't this what C++ namespaces are for?
>
> That ignores macros

I'll concede that it can help with macros, though I suggest that the
problems with macros remain and would be far better addressed with things
like scoped macros. Export is a particularly backwards way to solve problems
with the preprocessor, sort of like fixing rust by putting duct tape over it
<g>. Good practice with macros is to treat them all as having potentially
global effect.

> and argument dependent lookup, which transcend
> namespaces (or rather operate in a slightly unpredictable set of
> namespaces in the case of the latter).

I don't see how this would be a problem.


> >2) Given the ease with which Java .class files are "decompiled" back into
> >source code, and the fact that precompiled templates will necessarilly
> >contain even more semantic info than .class files, it is hard to see how
> >exported templates offer secure hiding of template implementations. It is
> >not analagous to the problem of "turning hamburger back into a cow" that
> >object file decompilers have. While some "security through obscurity" may
be
> >achieved by not documenting the file format, if the particular compiler
> >implementation is popular, there surely will appear some tool to do it.
>
> Precompiled templates don't need more semantic information than class
> files. In particular, all code involving non-dependent names can be
> fully compiled,

There's very little of that in templates, otherwise, they wouldn't need to
be templates. Realistically, I just don't see compiler vendors trying to mix
object code with syntax trees - the implementation cost is very high and the
benefit for a typical template is nil.

> or at the very least, the names can be removed from
> the precompiled template file.

Removing a few names doesn't help much - .class files remove names from all
the locals, and that hasn't even slowed down making a decompiler for it.

> In other words, the file format might
> consist of a combination of ordinary object code intermingled with
> other more detailed stuff.

Consider that within the precompiled template file, the compiler will need
to extract the names and offsets of all the members of the template classes,
coupled with the syntax trees of all the template functions, waiting to be
decorated with names and types. That is much more than what's in a .class
file. Another way to see this is that template files will necessarily be
*before* the semantic stage of the compiler, whereas .class files are
generated *after* the semantic stage; more information can be thrown away in
the latter case.


> >3) The faster compilation is theoretically based on the idea that the
> >template implementation doesn't need to be rescanned and reparsed every time
> >it is #include'd. However, many modern C++ compilers already support
> >"precompiled headers", which already provide just that capability without
> >export at all. I'd be happy to accept a compilation speed benchmark
> >challenge of Digital Mars DMC++ with precompiled headers vs an export
> >implementation.
> The compilation speed advantages also come from dependency reductions.
> If template definitions are modified, only the template
> specializations need to be recompiled. If the instantiation context
> (which I believe only consists of all extern names) is saved in each
> case, then the instantiation can be recompiled without having to
> recompile the whole TU containing the implicit template instantiation.

I'll still be happy to do the benchmark <g>. In my experience with projects
that attempted to maintain a complex dependency database, the time spent
maintaining it exceeded the time spent doing a global rebuild. Worse, the
dependency database was a rich source of bugs, so whenever there were
problems with the result, the first thing one tried was a global rebuild.
(For an example, consider "incremental linker" technology. You'd think that
would be a fairly easy problem, but incremental linkers suffered from so
many bugs that the first step in debugging was to do a clean build. Also,
Digital Mars' optlink does a full build faster than the incremental linkers
do an incremental one. Another example I know of is one where the dependency
database caused a year delay in the project and never did work right,
arguably causing the eventual failure of the entire project.)


> So, ignoring implementation difficultly, I think export does just win
> as a useful feature. With the implementation difficultly, it's not so
> clear.

My problem is with the "ignoring implementation difficulty" bit. I attended
some of the C++ meetings early on, and a clear attitude articulated to me by
more than one member was that implementation difficulty was irrelevant, only
the user experience mattered. My opinion then, as now, is that
implementation difficulty strongly affects the users, since:

1) hard to implement features take a long time to implement, years in the
case of export, so users have to wait
2) hard to implement features usually result in buggy implementations that
take a long time to shake out
3) compiler vendors implement features in different orders, leading to
incompatible implementations
4) spending time on hard to implement features means that other features,
potentially more desirable to users, get backburnered

And we've all seen 1..4 impact users for years on end, and the impact is
still ongoing.

There's another issue that may or may not matter depending on who you talk
to: hard to implement features wind up shrinking the number of
implementations. Back in the 80's, I once counted 30 different C compilers
available for just the IBM PC. How many different implementations of C++ are
there now? And it's still shrinking. I venture that a shrinking
implementation base is not good for the long term health of the language.

Hyman Rosen

Aug 19, 2004, 8:04:33 AM
Walter wrote:
> "Hyman Rosen" <hyr...@mail.com> wrote

>>And users of the joe class only include joe.h, and never
>>have to worry about joe.c.
>
> They would for templates, since the compiler will need to precompile it at
> some point. You'd still have to put a dependency on joe.c in the makefile,
> etc., since there's now an order to compiling the source files.

If this comes to you from a library provider, they could ship
compiled versions of the implementation file, just as they now
ship object files (assuming the compiler vendor supplied such
support). If it's your own templates, you simply make your
overall project depend on the implementation files if your
vendor uses the freedom given by 14/9 to force you to compile
the implementations first.


> Furthermore, there's no indication to the compiler that the
> template implementation is in joe.c, so some sort of cross
> reference would need building or it'd need to be manually
> specified. That's all doable, of course, and is not that big
> an issue, but I wished to point out that it isn't quite as
> simple as object files are.

How is it different from object files? For normal source files
you must specify which object files are part of your program,
what their source files are, and what other files they depend
on. Some of this is done automatically by various development
platforms, but it is always done. How are exported templates
different?

> But just what are the benefits of separate compilation?
> They are the 3 I mentioned. There is no semantic benefit
> that namespaces can't address.

I will believe this once you agree that C++ should require
definitions of all functions, template or not, to be included
in every compilation unit which calls them. If you do not
agree that this is a good idea for plain functions, I do not
see why it's a good idea for function templates.

> I don't understand why namespaces wouldn't do the job.
> Isn't that kind of problem exactly what namespaces were
> designed to solve?

No. By requiring that implementations be bodily included
where instantiations are needed, the implementations are,
first, subject to the various pollutions of the instantiation
space, not least of which are macros, and secondly, are not
able to use anonymous namespaces since that will break the
ODR. As I said, unless you can convince me that the inclusion
model is good for ordinary methods, I have no reason to believe
that it's good for template methods.


> So, we have, for the user:
> export template foo ...
> v.s.
> #include "foo_implementation.h"
> and they're done in either case.

You fail to see that the second case exposes the implementation
to the vagaries of the instantiation environment while the first
does not. Think of macros if nothing else.

Hyman Rosen

Aug 19, 2004, 8:09:25 AM
Thorsten Ottosen wrote:
> ok, but with separate compilation we get faster compilation.
> How much faster will/can the export version be?

Faster by a factor of 3.7. The point is not to have faster
compilation, although that would be nice, but to have cleaner
compilation, so that implementations do not intertwine with
the usage environments any more than required by the lookup
rules.

> Currently it is also seriously tedious to implement class
> template member functions outside the class.

Huh? Why? Just because you have to repeat the template
header part? That's not that much more onerous than
repeating ClassName:: in front of ordinary methods.
Make a macro if it's that bothersome. With export, you
won't have to worry about the macros colliding with
client code!

Hyman Rosen

Aug 19, 2004, 8:10:24 AM
Andrei Alexandrescu (See Website for Email) wrote:
> What is the few-words good explanation of export?

Here's my attempt:
Because of historical reasons having to do with how templates are
implemented, template methods (and static data members) are
effectively considered inline, and so their definitions must be
included in any compilation unit which requires an instantiation.

The export keyword breaks this requirement. When a template method
is marked as export, its definition does not get included into
compilation units which use it. Instead, it goes into its own source
file(s), and is compiled separately.

The C++ standard permits an implementation to require that the
definition of an exported method or object must be compiled before a
reference to such an object is compiled.

> 2. export failed horribly at doing what was initially supposed to do. I

> believe what it was supposed to do was true separate compilation of templates.

Compilation of templates is obviously not like compilation of ordinary
source, since template instantiation requires information from the
instantiation context and from the template parameters. But what do you
mean by "true" separate compilation? What kind of separate compilation
is not true? I understand that some people wish that export should be a
way to prevent people from examining template implementation code, but
that's hardly something for the standard to worry about. I understand
that some people either think or wish that export should affect how
templates are instantiated, but export has nothing to do with that.

To express it as simply as possible, imagine that C++ required that every
function be declared inline, and that therefore the implementation of every
function must be included in any compilation unit that used it. This is the
model that unexported templates labor under, and is what export is designed
to avoid.

Walt Karas

Aug 19, 2004, 8:25:27 PM
jerem...@gmail.com (Jeremy Siek) wrote in message news:<21925601.04080...@posting.google.com>...

> CALL FOR PAPERS/PARTICIPATION
>
> C++, Boost, and the Future of C++ Libraries
> Workshop at OOPSLA
> October 24-28, 2004
> Vancouver, British Columbia, Canada
> http://tinyurl.com/4n5pf

I don't have the time to pursue it myself, but maybe someone else might.

I have written a prototype for an alternative approach to generic
ordered containers.

http://www.geocities.com/wkaras/gen_cpp/avl_tree.html

For someone who says: "I have some instances of a type that I want
to keep in order; I don't mind if the instances are copied in and
out of the heap as the means of storing them in order", STL map/set
are what they're looking for.

For someone who says, "I have some 'things' that I want to keep in
order; the 'things' are uniquely identified by 'handles'; I am willing
to 'palletize' my 'things' so that each one can store the link 'handles'
necessary to form the container; I don't (necessarily) want to copy
the 'things', and I don't want the container to rely on the heap; I'm
willing to provide more 'glue logic' than what's needed when using
map or set", this alternative type of ordered container is what
they're looking for.

This approach has similarities with the approach that relies on
the items to be collected having a base class with the links
needed to form the container. But it is significantly more
flexible.

On the other hand, given the existence of the WWW, I'm not sure
it's worth the effort to add more templates to the standard lib.
when they can easily be implemented in a portable way. It seems like
the standard lib. is becoming like the Academy Awards for good
code, rather than a way of making it easier to write portable
code.

Jerry Coffin

Aug 19, 2004, 8:32:27 PM
"Andrei Alexandrescu \(See Website for Email\)"

[ ... ]

> I'd say, a good feature, like a good business idea, can be explained in a
> few words. What is the few-words good explanation of export? (I *am*
> interested.)

From what I've heard, the good explanation is that it prevented a
civil war in the C++ committee, so without it there might not be a C++
standard at all (or at least it might have been delayed considerably).

As to why people thought they wanted it: so it would be possible to
distribute template code as object files in libraries, much like
non-template code is often distributed. I don't believe that export,
as currently defined, actually supports this though.



> Does it do what it was intended to do? (Again, I *am* interested.)

It's hard to say without certainty of the intent. Assuming my guess
above at the intent was correct, then I'm quite certain it does NOT do
what's intended.

If, OTOH, the idea is that template code is still distributed as
source code, and export merely allows that source code to be compiled
by itself, then it does what was intended, within a limited scope.

OTOH, if that was desired to speed up compilation, then I think it
basically fails -- at least with Comeau C++, compiling the templates
separately doesn't seem to gain much, at least for me (and since I
have no other compilers that support, or even plan to soon support
export, Comeau is about the only one that currently matters).

[ ... ]



> 2. export failed horribly at doing what was initially supposed to do. I
> believe what it was supposed to do was true (not "when"s and "cough"s and
> "um"s) separate compilation of templates. Admittedly, gaffes in other areas
> of language design are at fault for that failure. Correct me if I'm wrong.

I don't know how much is due to true gaffes, and how much to the simple
fact that templates are different enough from normal code that what was
expected was simply impossible (or at least extremely close to it).



> 3. export failed miserably at being reasonably easy to implement.

I can hardly imagine how anybody could argue that one.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Thorsten Ottosen

Aug 19, 2004, 9:21:04 PM
"Hyman Rosen" <hyr...@mail.com> wrote in message news:10928618...@master.nyc.kbcfp.com...

| > Currently it is also seriously tedious to implement class
| > template member functions outside the class.
|
| Huh? Why? Just because you have to repeat the template
| header part?

yes.

| That's not that much more onerous than
| repeating ClassName:: in front of ordinary methods.

many templates have several parameters; then they might have templated member functions.
This gets *very* tedious to define outside a class.

br

Thorsten

Daveed Vandevoorde

Aug 19, 2004, 9:31:41 PM
"Andrei Alexandrescu wrote:
> "Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
> news:52f2f9cd.04081...@posting.google.com...
> > "Andrei Alexandrescu wrote:
> > [...]
> > > Maybe "export" which is so broken and so useless and so abusive that its
> > > implementers have developed Stockholm syndrome during the long years
> > > that
> > > took them to implement it?
> >
> > How is "export" useless and broken?
> >
> > Have you used it for any project? I find it very pleasant
> > to work with in practice.
>
> Haven't used export, and not because I didn't wanna.

What has prevented you from at least trying it? An affordable
implementation has been available for well over a year.

Without doing so, I fail to see how you can objectively make the
assertions you made.

> [Whom do you think I referred to when mentioning the Stockholm syndrome?
> :o)]

Adding a smiley to an inappropriate remark does not make it
any more appropriate.

> I'd say, a good feature, like a good business idea, can be explained in a
> few words. What is the few-words good explanation of export? (I *am*
> interested.)

It allows you to separate a template function, member function, or static
data member implementation ("definition") into a single translation unit.

> In addition, a good programming language feature does what it was intended
> to do (plus some other neat things :o)), and can be reasonably implemented.

To clarify: That is "good" in your personal point of view.

The intent of the feature was to protect template definitions from
"name leakage" (I think that's the term that was used at the time;
it refers to picking up unwanted declarations due to excessive
#inclusion). export certainly fulfills that.

export also allows code to be compiled faster. (I'm seeing gains
without even using an export-aware back end.)

export also allows the distribution of templates in compiled form
(as opposed to source form).

> I think we all agree "export" fell the last test.

export was hard to implement for us, no doubt.

> Does it do what it was intended to do? (Again, I *am* interested.)

Yes, and more. See above.

> A summary of what's the deal with export would be of great help to at least
> myself, so I'd be indebted to anyone who'd give me one. For full disclosure,
> my current perception is:
>
> 1. It's hard to give a good account of what export does in a few words, at
> least an account that's impressive.

Impressive is in the eye of the beholder. Whenever I use the feature,
I'm impressed that it works so smoothly.

> 2. export failed horribly at doing what was initially supposed to do. I
> believe what it was supposed to do was true (not "when"s and "cough"s and
> "um"s) separate compilation of templates. Admittedly, gaffes in other areas
> of language design are at fault for that failure. Correct me if I'm wrong.

How does it fail at separate compilation?

> 3. export failed miserably at being reasonably easy to implement.

While it is true that it was hard to implement for EDG (I am not aware of
anyone else having even tried), it was never claimed by the proponents
that it would be easy to implement.

After EDG implemented export, Stroustrup once asked what change to
C++ might simplify its implementation without giving up on the separate
compilation aspect of it. I couldn't come up with anything other than the
very drastic notion of making the language 100% modular (i.e., every entity
can be declared in but one place). That doesn't mean that a template
separation model is not desirable.

> Combined with 1 and 2, I can only say: at the very best, export is a Pyrrhic
> victory.

The history of the C++ "export" feature may well be the very incarnation of irony.
However, I don't think there is a matter of "victory" here.

I contend that, all other things being equal, export templates are more
pleasant to work with than the equivalent inclusion templates. That by
itself is sufficient to cast doubt on your claim that the feature is "broken
and useless."

Daveed

Gabriel Dos Reis

Aug 19, 2004, 9:38:41 PM
Hyman Rosen <hyr...@mail.com> writes:

| Andrei Alexandrescu (See Website for Email) wrote:
| > What is the few-words good explanation of export?

Do those few words need to be technical, or are they for marketing purposes?
This is a genuine question, as I suspect that too much hype and
marketing have been pushed against export. The impression I got was
not cleared up after discussion of a well-known paper.

| Here's my attempt:
| Because of historical reasons having to do with how templates are
| implemented, template methods (and static data members) are
| effectively considered inline, and so their definitions must be
| included in any compilation unit which requires an instantiation.

I think that use of "inline" is unfortunate. I don't think that
description accurately covers what CFront did and other historical
repository-based instantiations (like in old Sun CC).

Export is the result of a compromise. A compromise between proponents of
the inclusion-only model and proponents of separate compilation of templates.

--
Gabriel Dos Reis
g...@integrable-solutions.net

Walter

Aug 19, 2004, 9:39:30 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:10928643...@master.nyc.kbcfp.com...

> To express it as simply as possible, imagine that C++ required that every
> function be declared inline, and that therefore the implementation of every
> function must be included in any compilation unit that used it. This is the
> model that unexported templates labor under, and is what export is designed
> to avoid.

In the D programming language, all functions are potentially inline (at the
compiler's discretion), and so all the function bodies are available, even
though it follows the separate compilation model. All the programmer does is
use the statement:

import foo;

and the entire semantic content of foo.d is available to the compiler,
including whatever template and function bodies are in foo.d. So, is this a
burden the compiler labors under? Not that anyone has noticed; it compiles
code at a far faster rate than a C++ compiler can. I can go into the reasons
why if anyone is interested.

But back to what can be done with C++. Many compilers implement precompiled
headers, which do effectively address this problem reasonably well. Export
was simply not needed to speed up compilation, and for those who don't
believe me, I stand by my challenge to benchmark Digital Mars C++ with
precompiled headers against any export template implementation for project
build speed.

Walter

Aug 19, 2004, 9:41:05 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:10928607...@master.nyc.kbcfp.com...

> Walter wrote:
> > "Hyman Rosen" <hyr...@mail.com> wrote
> >>And users of the joe class only include joe.h, and never
> >>have to worry about joe.c.
> >
> > They would for templates, since the compiler will need to precompile it at
> > some point. You'd still have to put a dependency on joe.c in the makefile,
> > etc., since there's now an order to compiling the source files.
>
> If this comes to you from a library provider, they could ship
> compiled versions of the implementation file, just as they now
> ship object files (assuming the compiler vendor supplied such
> support). If it's your own templates, you simply make your
> overall project depend on the implementation files if your
> vendor uses the freedom given by 14/9 to force you to compile
> the implementations first.

I agree it's not a big problem, it just isn't as simple as not having to
worry about joe.c <g>.

> > Furthermore, there's no indication to the compiler that the
> > template implementation is in joe.c, so some sort of cross
> > reference would need building or it'd need to be manually
> > specified. That's all doable, of course, and is not that big
> > an issue, but I wished to point out that it isn't quite as
> > simple as object files are.
> How is it different from object files? For normal source files
> you must specify which object files are part of your program,
> what their source files are, and what other files they depend
> on. Some of this is done automatically by various development
> platforms, but it is always done. How are exported templates
> different?

The cross reference database is what the librarian does <g>. And again, I
agree that this is not a huge problem, but it is a problem and it does, for
example, impose an order on the compilations that was not required before.
But I'll also say that, as a compiler vendor, one of the most common tech
questions I get is "I am getting an undefined symbol message from the
linker, what do I do now?" I imagine this would be worse with template
interdependencies, but I could be wrong.


> > But just what are the benefits of separate compilation?
> > They are the 3 I mentioned. There is no semantic benefit
> > that namespaces can't address.
> I will believe this once you agree that C++ should require
> definitions of all functions, template or not, to be included
> in every compilation unit which calls them. If you do not
> agree that this is a good idea for plain functions, I do not
> see why it's a good idea for function templates.

The D language does this, and it works fine. I've also heard of C++
compilers that do cross-module optimization that do this as well. Good idea
or not, it is doable with far less effort than export. But I emphasize that
the way the C++ language was designed makes it *easy* to implement separate
compilation for functions. That same design makes it *very hard* to do
separate compilation for templates, no matter how conceptually the same we
might wish them to be. If C++ had the concept of modules (rather than its
focus on source text), this would not be such a big problem. And it is not
enough that an idea be just a good idea, its advantages must outweigh the
costs. The costs of export are enormous, and the corresponding enormous gain
just isn't there.


> > I don't understand why namespaces wouldn't do the job.
> > Isn't that kind of problem exactly what namespaces were
> > designed to solve?
> No. By requiring that implementations be bodily included
> where instantiations are needed, the implementations are,
> first, subject to the various pollutions of the instantiation
> space, not least of which are macros, and secondly, are not
> able to use anonymous namespaces since that will break the
> ODR.

Why not use a named namespace?

> > So, we have, for the user:
> > export template foo ...
> > v.s.
> > #include "foo_implementation.h"
> > and they're done in either case.
>
> You fail to see that the second case exposes the implementation
> to the vagaries of the instantiation environment while the first
> does not. Think of macros if nothing else.

I'll agree on the macro issue, but nothing else <g>. And to repeat what I
said earlier about the macro issue, export seems to be an awfully expensive
solution to the macro scoping problem, especially since the macro pollution
issue still remains. Wouldn't it be better to come at the macro problem
head-on?

Andrei Alexandrescu (See Website for Email)

Aug 20, 2004, 6:03:20 AM
"Hyman Rosen" <hyr...@mail.com> wrote in message
news:10928643...@master.nyc.kbcfp.com...

> Andrei Alexandrescu (See Website for Email) wrote:
> > What is the few-words good explanation of export?
>
> Here's my attempt:
> Because of historical reasons having to do with how templates are
> implemented, template methods (and static data members) are
> effectively considered inline, and so their definitions must be
> included in any compilation unit which requires an instantiation.
>
> The export keyword breaks this requirement. When a template method
> is marked as export, its definition does not get included into
> compilation units which use it. Instead, it goes into its own source
> file(s), and is compiled separately.
>
> The C++ standard permits an implementation to require that the
> definition of an exported method or object must be compiled before a
> reference to such an object is compiled.

Thanks.

> > 2. export failed horribly at doing what was initially supposed to do. I
> > believe what it was supposed to do was true separate compilation of
> > templates.
>
> Compilation of templates is obviously not like compilation of ordinary
> source, since template instantiation requires information from the
> instantiation context and from the template parameters. But what do you
> mean by "true" separate compilation? What kind of separate compilation
> is not true?

By "true" separate compilation I understand the dependency gains that
separate compilation achieves. That is crucial, and whole projects can be
organized around that idea. It translates into what needs be compiled when
something is touched, with the expected build speed and stability tradeoffs.

Templates cannot be meaningfully typechecked during their compilation. They
also cause complications when instantiated in different contexts. That makes
them unsuitable for "true" separate compilation. Slapping on a keyword that
makes them appear separately compilable, while the truth inside the
compilation system's guts is different (in terms of dependency management
and compilation speed), didn't help.

> I understand that some people wish that export should be a
> way to prevent people from examining template implementation code, but
> that's hardly something for the standard to worry about.

I consider that a secondary issue.

> To express it as simply as possible, imagine that C++ required that every
> function be declared inline, and that therefore the implementation of every
> function must be included in any compilation unit that used it. This is the
> model that unexported templates labor under, and is what export is designed
> to avoid.

I don't think that's a good parallel, because non-inline functions
can be typechecked and compiled to executable code in isolation. Templates
cannot.


Andrei

Andrei Alexandrescu (See Website for Email)

Aug 20, 2004, 6:04:04 AM
"Jerry Coffin" <jco...@taeus.com> wrote in message
news:b2e4b04.04081...@posting.google.com...

>> 2. export failed horribly at doing what was initially supposed to do. I
>> believe what it was supposed to do was true (not "when"s and "cough"s and
>> "um"s) separate compilation of templates. Admittedly, gaffes in other
>> areas
>> of language design are at fault for that failure. Correct me if I'm
>> wrong.
>
> I don't know how much is true gaffes, and how much the simple fact
> that templates are enough different from normal code that what was
> expected was simply (at least extremely close to) impossible.

Fundamentally, templates are sensitive to the point where they are
instantiated, and not to the parameters alone. That's a classic PL design
mistake because it undermines modularity. The effects of such a mistake are
easily visible :o). They've been visible in a couple of early languages as
well.

Andrei

Hyman Rosen

Aug 20, 2004, 6:16:00 AM
Walter wrote:
> That same design makes it *very hard* to do separate compilation for templates,
> no matter how conceptually the same we might wish them to be.

Just because it's hard for compiler makers doesn't mean it's hard for
compiler users. Indeed, I find the inclusion model for templates to
be difficult and annoying and find export to be perfectly intuitive.

> its advantages must outweigh the costs. The costs of export are enormous, and the
> corresponding enormous gain just isn't there.

Export imposes no cost at all on programmers. It's a simple concept and simple to use.
Compiler vendors for the most part can't be bothered to implement it, because everyone
uses the inclusion model because they have to.

> Why not use a named namespace?

Because then the inclusion model would violate the ODR, unless I prepare *another*
source file with the implementations of those global functions that my template
methods want to call. Why should I jump through hoops when the standard gives me a
perfectly simple way not to?

Gabriel Dos Reis

Aug 20, 2004, 9:43:23 AM
"Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> writes:

| "Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
| news:52f2f9cd.04081...@posting.google.com...
| > "Andrei Alexandrescu wrote:
| > [...]
| > > Maybe "export" which is so broken and so useless and so abusive that its
| > > implementers have developed Stockholm syndrome during the long years
| > > that
| > > took them to implement it?
| >
| > How is "export" useless and broken?
| >
| > Have you used it for any project? I find it very pleasant
| > to work with in practice.
|
| Haven't used export, and not because I didn't wanna.

Very interesting.

--
Gabriel Dos Reis
g...@integrable-solutions.net


Andrei Alexandrescu (See Website for Email)

Aug 20, 2004, 9:47:41 AM
"Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
news:52f2f9cd.04081...@posting.google.com...
> "Andrei Alexandrescu wrote:
> Without doing so, I fail to see how you can objectively make the
> assertions you made.
>
>> [Whom do you think I referred to when mentioning the Stockholm syndrome?
>> :o)]
>
> Adding a smiley to an innapropriate remark does not make it
> any more appropriate.

Sorry you found it inappropriate. I thought of emailing a private apology,
but apologies must be made in front of those who witnessed the offense.
So - I am sorry; I had found the comparison funny and meant it without harm.

I think what I need to do is go use the feature, and then make noise when
I'm more based...


Andrei

Hyman Rosen

Aug 20, 2004, 9:49:02 AM
Jerry Coffin wrote:
> I don't believe that export, as currently defined, actually supports this though.

Why don't you believe it? What is it about export that you think would prevent
a compiler vendor from doing just that? Obviously compiled templates act as
further input to the "compiler" rather than to the "linker" (assuming those
distinctions exist), but what of that?

>>3. export failed miserably at being reasonably easy to implement.
> I can hardly imagine how anybody could argue that one.

Who said it was supposed to be easy? Templates aren't easy, exceptions
aren't easy, manipulating vtable pointers during construction isn't easy.

Hyman Rosen

Aug 20, 2004, 9:49:46 AM
Thorsten Ottosen wrote:
> many templates have several parameters; then they might have templated member functions.
> This get *very* tedious to define outside a class.

Like I said, just use a macro. Combine this with export,
so that the macro is hidden in the implementation file.

Hyman Rosen

Aug 20, 2004, 9:50:52 AM
Gabriel Dos Reis wrote:
> I think that use of "inline" is unfortunate. I don't think that
> description accurately covers what CFront did and other historical
> repository-based instantiations (like in old Sun CC).

That's not relevant to the language defined by the standard.
In standard C++, either a template method is marked by export,
or it must be included in every compilation unit that would
instantiate it. That is exactly the model of inline - an inline
method must be defined in every compilation unit that calls it.

> Export is the result of a compromise. A compromise between proponents of
> the inclusion-only model and proponents of separate compilation of templates.

Every aspect of the language has interesting historical background
that is largely irrelevant to understanding how to use it. I think
export just got a bum rap because no one bothered to implement it
because you could get along without it and there were other fish to
fry.

Hyman Rosen

Aug 20, 2004, 9:51:13 AM
Walter wrote:
> Export was simply not needed to speed up compilation

That's fine, because that's not what it's for.

Andrei Alexandrescu (See Website for Email)

Aug 20, 2004, 11:27:41 PM
"Gabriel Dos Reis" <g...@integrable-solutions.net> wrote in message
news:m31xi2c...@uniton.integrable-solutions.net...

> "Andrei Alexandrescu \(See Website for Email\)"
> <SeeWebsit...@moderncppdesign.com> writes:
>
> | "Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
> | news:52f2f9cd.04081...@posting.google.com...
> | > "Andrei Alexandrescu wrote:
> | > [...]
> | > > Maybe "export" which is so broken and so useless and so abusive that
> its
> | > > implementers have developed Stockholm syndrome during the long years
> | > > that
> | > > took them to implement it?
> | >
> | > How is "export" useless and broken?
> | >
> | > Have you used it for any project? I find it very pleasant
> | > to work with in practice.
> |
> | Haven't used export, and not because I didn't wanna.
>
> Very interesting.

Well, it's very banal really: I have only had the chance to use compilers
that don't implement export.

Andrei

Gabriel Dos Reis

Aug 20, 2004, 11:30:36 PM
Hyman Rosen <hyr...@mail.com> writes:

| Gabriel Dos Reis wrote:
| > I think that use of "inline" is unfortunate. I don't think that
| > description accurately covers what CFront did and other historical
| > repository-based instantiations (like in old Sun CC).
|
| That's not relevant to the language defined by the standard.

Here is a piece that disappeared in your reply and that makes the
above statement the most relevant as a reply to your earlier message:

Because of historical reasons having to do with how templates are
implemented, template methods (and static data members) are
effectively considered inline, and so their definitions must be
included in any compilation unit which requires an instantiation.

If you believe "historical reasons" to be irrelevant to the language
defined by the standard, why did you bring them up in the first place?

--
Gabriel Dos Reis
g...@integrable-solutions.net


Hyman Rosen

Aug 20, 2004, 11:33:36 PM
Andrei Alexandrescu (See Website for Email) wrote:
> Templates cannot be meaningfully typechecked during their compilation. They
> also cause complications when instantiated in different contexts. That makes
> them unsuitable for "true" separate compilation. Slapping a keyword that
>> makes them appear as separately compilable while the truth inside the
>> compilation system's guts is different (in terms of dependency management and
> compilation speed) didn't help.

There have been environments in which the linker did whole-program
optimization, inlining routines out of object files into the call
sites. I think you are mistaken in concept when you try to peer under
the hood of the compiler to call some of its operations "true" and
some not. Step back and think of nothing but the point of view of
the user. Also consider that while extremely complex cases of
instantiation must be handled correctly by the compiler, most actual
templates that people write don't have weird name-capturing problems.

The export user sees method implementations compiled separately in an
implementation source file, and declarations included by header in
compilation units that need the methods. For the user, this appears
to work just like non-template code.

> I don't think that that's a good parallel. Because the non-inline functions
> can be typechecked and compiled to executable code in separation. Templates
> cannot.

You are thinking of this from a compiler writer's perspective.
From a user's perspective, he submits the code to a compiler
and gets an executable out. Without export, the implementation
must be included in the places where it's called, subjecting it
to the vagaries of that environment. That's a pain.

Jerry Coffin

Aug 20, 2004, 11:47:59 PM
Hyman Rosen <hyr...@mail.com> wrote in message news:<YEfVc.7145$si.4688@trndny06>...

> Jerry Coffin wrote:
> > I don't believe that export, as currently defined, actually supports this though.
>
> Why don't you believe it?

Because I see the same things you seem to want to ignore.

> What is it about export that you think would prevent
> a compiler vendor from doing just that? Obviously compiled templates act as
> further input to the "compiler" rather than to the "linker" (assuming those
> distinctions exist), but what of that?

In the end, such distinctions DO exist, and they exist for what many
people clearly consider very good reasons -- specifically that they
provide real benefits. I believe the expectation of most people was
that export would provide the same or similar benefits, but I don't
believe they do so.

Worse, your defense of export seems backward to me: you claim to be
receiving the benefits of export even with a backend that doesn't
support it. That sounds to me like the benefits don't really come from
export at all.

> >>3. export failed miserably at being reasonably easy to implement.
> > I can hardly imagine how anybody could argue that one.
>
> Who said it was supposed to be easy? Templates aren't easy, exceptions
> aren't easy, manipulating vtable pointers during construction isn't easy.

The wording was "reasonably easy", not just easy.

Reasonable in this sort of usage normally implies one or both of two
things: 1) that it's roughly in line with what was expected, and 2)
that the benefits are sufficient to justify the cost.

I'm not on the committee myself, but I've certainly conversed with a
number of committee members, and ALL of them I've talked to have
admitted that export has turned out to be substantially more difficult
to implement than was expected.

So far, few people who've really used export seem to have found
tremendous benefits to it either.

By contrast, nobody seems to have been terribly surprised by the
amount of work it has taken to implement exception handling, and most
people seem to believe that its benefits justify the work.

The area where exception handling may have surprised some people
and/or had unjustified costs is not in its implementation, but the
burden for exception safety that's placed on the user. Though few
people seem to be aware of it (yet), export has a similar effect -- it
affects name lookup in ways most people don't seem to expect, and can
substantially increase the burden on the user.

So, if you'd prefer, I'd rephrase the question: as I recall, it's
claimed that EDG put something like three man-years of labor into
implementing export, and that's not all it takes to implement it
either. So far even if we take ALL the users into account, I've yet to
see an indication that anybody has saved three man-years of labor
because export was there. That seems to indicate that at this point,
export is a net loss.

The next obvious question would be at what point export breaks even.
At this point, the benefits are sufficiently tenuous that I'm not at
all sure anybody can really guess at the time, or even state with any
certainty that there will ever BE such a time.

--
Later,
Jerry.

The universe is a figment of its own imagination.


Walter

Aug 20, 2004, 11:51:50 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:TWeVc.7135$si.3217@trndny06...

> Walter wrote:
> > That same design makes it *very hard* to do separate compilation for
> > templates, no matter how conceptually the same we might wish them to be.
> Just because it's hard for compiler makers doesn't mean it's hard for
> compiler users. Indeed, I find the inclusion model for templates to
> be difficult and annoying and find export to be perfectly intuitive.

I find:
import foo;
to be far more natural and intuitive than:
#include "foo.h"
(and I don't even have to write a .h file), but the inclusion model is what
C++ is, and export doesn't fix that. If C++ had gone a step further than
export, and offered true modules, I suspect it would have been a lot easier
to implement.

> > its advantages must outweigh the costs. The costs of export are
> > enormous, and the corresponding enormous gain just isn't there.
> Export imposes no cost at all on programmers. It's a simple concept
> and simple to use.

Oh, the costs imposed on programmers are very high. The cost is in terms of
years of delay, deferring of the arrival of other improvements, shrinking
the number of independent C++ implementations, etc.

> Compiler vendors for the most part can't be bothered to implement it,
> because everyone uses the inclusion model because they have to.

The reason why is it takes 2 to 3 man years to implement. This wouldn't be
an issue at all if it were just a matter of bothering to implement it.

> > Why not use a named namespace?
> Because then the inclusion model would violate the ODR, unless I prepare
> *another* source file with the implementation of those global functions
> that my template methods want to call.

I must apologize, because I'm just not getting why namespaces won't work or
what this has to do with the ODR.

> Why should I jump through hoops when the standard gives me a perfectly
> simple way not to?

If export is the only improvement you want from existing C++ compilers,
that's fine. But I find it hard to believe there aren't other things you'd
be interested in having, like better templates, fewer bugs, better
optimization, better libraries, a lower price, better support, nicer IDE,
better debuggers, etc. What are you willing to give up to get export, and
what have you given up for it?

Walter

Aug 20, 2004, 11:52:50 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:PafVc.7138$si.4782@trndny06...

> Walter wrote:
> > Export was simply not needed to speed up compilation
>
> That's fine, because that's not what it's for.

I apologize, I assumed that's what you meant when you talked about
"laboring" under the inclusion model. It's repeatedly listed as one of the 3
justifications for export, and not just by myself. Daveed Vandevoorde
listed them in his post in this thread:

---------------------------------------------------------------------


The intent of the feature was to protect template definitions from
"name leakage" (I think that's the term that was used at the time;
it refers to picking up unwanted declaration due to excessive
#inclusion). export certainly fulfills that.

export also allows code to be compiled faster. (I'm seeing gains
without even using an export-aware back end.)

export also allows the distribution of templates in compiled form
(as opposed to source form).

----------------------------------------------------------------------

Walter

Aug 20, 2004, 11:53:14 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:YEfVc.7145$si.4688@trndny06...

> >>3. export failed miserably at being reasonably easy to implement.
> > I can hardly imagine how anybody could argue that one.
>
> Who said it was supposed to be easy? Templates aren't easy, exceptions
> aren't easy, manipulating vtable pointers during construction isn't easy.

2 to 3 man years to implement is quite another level of "easy". It's
extremely costly to implement, costly to users as well as vendors.

tom_usenet

Aug 20, 2004, 11:56:52 PM
On 19 Aug 2004 21:31:41 -0400, goo...@vandevoorde.com (Daveed
Vandevoorde) wrote:

>export also allows the distribution of templates in compiled form
>(as opposed to source form).

This I would love to hear about. What do compiled templates look like?
I hope commercial pressures don't prevent you from replying.

Are compiled templates easily decompiled (assuming the file format is
not obscure)? How much source information can be thrown away in
compiling them?

>After EDG implemented export, Stroustrup once asked what change to
>C++ might simplify its implementation without giving up on the separate
>compilation aspect of it. I couldn't come up with anything other than the
>very drastic notion of making the language 100% modular (i.e., every entity
>can be declared in but one place).

That is exactly the same conclusion that I have reached (intuitively
rather than through experience or careful working through of the
problem); separate compilation of templates within the usual C++ TU
model pretty much leads you to two phase name lookup and export.

Tom

tom_usenet

Aug 20, 2004, 11:57:17 PM
On 20 Aug 2004 06:04:04 -0400, "Andrei Alexandrescu \(See Website for
Email\)" <SeeWebsit...@moderncppdesign.com> wrote:

>"Jerry Coffin" <jco...@taeus.com> wrote in message
>news:b2e4b04.04081...@posting.google.com...
> >> 2. export failed horribly at doing what was initially supposed to do. I
> >> believe what it was supposed to do was true (not "when"s and "cough"s and
> >> "um"s) separate compilation of templates. Admittedly, gaffes in other
> >> areas
> >> of language design are at fault for that failure. Correct me if I'm
> >> wrong.
> >
> > I don't know how much is true gaffes, and how much the simple fact
> > that templates are enough different from normal code that what was
> > expected was simply (at least extremely close to) impossible.
>
>Fundamentally templates are sensitive to the point where they are
>instantiated, and not on the parameters alone. That's a classic PL design
>mistake because it undermines modularity. The effects of such a mistake are
>easily visible :o). They've been visible in a couple of early languages as
>well.

The alternative is to simply not have any kind of separate compilation
of templates. So we could drop two phase name lookup, "template" and
"typename" disambiguators, and possibly some other nasty features,
basically moving to the basic template inclusion model implemented by
old Borland and Microsoft compilers (except without the bugs!)

Hmm. It was simpler back then; I think two phase name lookup is still
extremely badly understood, as are the merits and otherwise of export.
(How many people realise that point-of-instantiation lookup of
function names uses ADL only, and not ordinary lookup?)

Tom

Gabriel Dos Reis

Aug 21, 2004, 12:04:52 AM
"Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> writes:

| "Gabriel Dos Reis" <g...@integrable-solutions.net> wrote in message
| news:m31xi2c...@uniton.integrable-solutions.net...
| > "Andrei Alexandrescu \(See Website for Email\)"
| > <SeeWebsit...@moderncppdesign.com> writes:
| >
| > | "Daveed Vandevoorde" <goo...@vandevoorde.com> wrote in message
| > | news:52f2f9cd.04081...@posting.google.com...
| > | > "Andrei Alexandrescu wrote:
| > | > [...]
| > | > > Maybe "export" which is so broken and so useless and so abusive that
| > its
| > | > > implementers have developed Stockholm syndrome during the long years
| > | > > that
| > | > > took them to implement it?
| > | >
| > | > How is "export" useless and broken?
| > | >
| > | > Have you used it for any project? I find it very pleasant
| > | > to work with in practice.
| > |
| > | Haven't used export, and not because I didn't wanna.
| >
| > Very interesting.
|
| Well, it's very banal really: I have only had the chance to use compilers
| that don't implement export.

What I found very interesting is not the fact that you used compilers
that don't implement export. What I found interesting is that you
made such strong statements based on no actual experience, as you
confessed. I'm not saying it is bad. Just very interesting.

--
Gabriel Dos Reis
g...@integrable-solutions.net


Gabriel Dos Reis

Aug 21, 2004, 6:15:27 AM
tom_usenet <tom_u...@hotmail.com> writes:

[...]

| Hmm. It was simpler back then; I think two phase name lookup is still
| extremely badly understood, as are the merits and otherwise of export.

I just wanted to remind that two-phase name lookup has been being
discussed looong before "export" came into the picture.

--
Gabriel Dos Reis
g...@integrable-solutions.net


Walter

Aug 21, 2004, 6:21:46 AM

"tom_usenet" <tom_u...@hotmail.com> wrote in message
news:7p8ci0lh7ifdlk04l...@4ax.com...

> The alternative is to simply not have any kind of separate compilation
> of templates. So we could drop two phase name lookup, "template" and
> "typename" disambiguators, and possibly some other nasty features,
> basically moving to the basic template inclusion model implemented by
> old Borland and Microsoft compilers (except without the bugs!)
>
> Hmm. It was simpler back then; I think two phase name lookup is still
> extremely badly understood, as are the merits and otherwise of export.
> (How many people realise that point-of-instantiation lookup of
> function names uses ADL only, and not ordinary lookup?)

How it works in D is pretty simple. The template arguments are looked up in
the context of the point of instantiation. The symbols inside the template
body are looked up in the context of the point of definition. It's how you'd
intuitively expect it to work, as it works analogously to ordinary function
calls.

I say "point" of instantiation/definition, but since D symbols can be
forward referenced, it's more accurate to say "scope" of
instantiation/definition. Isn't it odd that C++ class scopes can forward
reference member symbols, but that doesn't work at file scope?

Rob Williscroft

Aug 21, 2004, 12:00:50 PM
tom_usenet wrote in news:7p8ci0lh7ifdlk04l...@4ax.com in
comp.lang.c++:

>
> The alternative is to simply not have any kind of separate compilation
> of templates.

> So we could drop two phase name lookup,

Two-phase lookup allows us to write determinate code (*). I really don't
see what it has to do with separate compilation; if anything, it's a
feature that makes export harder (without it, the exported code wouldn't
need to remember the declaration context).

*) By this I mean code that does what the author of the code
intended and is subject to the minimum possible reinterpretation
at the point of instantiation.

> "template" and

Which "template"? Do you mean the .template disambiguator? If so, this is
a parsing problem: the parser prefers < to mean "less-than" over
"template-argument-list". It could be solved with some backtracking
(I think :), but we would lose: "if ( 0 < i > 0 ) ... ".

Or do you want code that is sometimes a member template and sometimes
a non-template member?

> "typename" disambiguators,

Again AIUI typename is about writing determinate code.

> and possibly some other nasty features,
> basically moving to the basic template inclusion model implemented by
> old Borland and Microsoft compilers (except without the bugs!)
>
> Hmm. It was simpler back then; I think two phase name lookup is still
> extremely badly understood, as are the merits and otherwise of export.

It was simpler, but also very confusing: the behaviour of your code
depended on when you instantiated your templates. Everything worked until
it stopped working, and then things just got strange (at least that's how
I remember it :).


> (How many people realise that point-of-instantiation lookup of
> function names uses ADL only, and not ordinary lookup?)

Not enough, but we haven't had many compilers that get this right,
so I think that will change.

Rob.
--
http://www.victim-prime.dsl.pipex.com/

Gabriel Dos Reis

Aug 21, 2004, 12:03:34 PM
"Walter" <wal...@digitalmars.nospamm.com> writes:

[...]

| I say "point" of instantiation/definition, but since D symbols can be
| forward referenced, it's more accurate to say "scope" of
| instantiation/definition. Isn't it odd that C++ class scopes can forward
| reference member symbols, but that doesn't work at file scope?

Why should I find that odd?

--
Gabriel Dos Reis
g...@integrable-solutions.net


David Abrahams

Aug 21, 2004, 12:05:14 PM
Hyman Rosen <hyr...@mail.com> writes:

> Andrei Alexandrescu (See Website for Email) wrote:
>> Templates cannot be meaningfully typechecked during their compilation. They
>> also cause complications when instantiated in different contexts. That makes
>> them unsuitable for "true" separate compilation. Slapping a keyword that
>> makes them appear as separately compilable while the truth inside the
>> compilation system's guts is different (in terms of dependency management and
>> compilation speed) didn't help.
>
> There have been environments in which the linker did whole-program
> optimization, inlining routines out of object files into the call
> sites. I think you are mistaken in concept when you try to peer under
> the hood of the compiler to call some of its operations "true" and
> some not. Step back and think of nothing but the point of view of
> the user. Also consider that while extremely complex cases of
> instantiation must be handled correctly by the compiler, most actual
> templates that people write don't have weird name-capturing problems.

Doesn't that tend to argue that export isn't solving a real problem?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Walter

Aug 21, 2004, 11:08:35 PM

"Gabriel Dos Reis" <g...@integrable-solutions.net> wrote in message
news:m3pt5ke...@uniton.integrable-solutions.net...

> "Walter" <wal...@digitalmars.nospamm.com> writes:
>
> [...]
>
> | I say "point" of instantiation/definition, but since D symbols can be
> | forward referenced, it's more accurate to say "scope" of
> | instantiation/definition. Isn't it odd that C++ class scopes can forward
> | reference member symbols, but that doesn't work at file scope?
>
> Why should I find that odd?

It's inconsistent. If I can do:

class Foo
{
    int abc() { return x; } // fwd reference ok
    int x;
};

why can't I do in C++:

int abc() { return x; } // error, x is undefined
int x;

?? (You can do that in D.) Furthermore, if forward references worked at file
scope, some of the confusing arcana of template instantiation lookup rules
would be unnecessary.

Paul Mensonides

Aug 21, 2004, 11:30:57 PM
Andrei Alexandrescu (See Website for Email) wrote:

>> Definitely. It is a wholly different language.
>
> Not it only remains for me to convince you that that's a disadvantage
> :o).

I'll rephrase slightly. The preprocessor (macro expansion in particular) is a
wholly different language. The primitives used are low-level library elements
used to implement the solution directly. It is possible to make the resulting
syntax cleaner by using higher-level constructs.

That said, along some lines I agree that having a different language is a
disadvantage. Along others I don't. A different language exercises the brain
and promotes new ways of doing things.

> I disagree it's only syntactic cleanliness. Lack of syntactic
> cleanliness is the CHAOS_PP_ that you need to prepend to most of your
> library's symbols. But let me pull the code again:
>
> #define REPEAT(count, macro, data) \
> REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
> /**/
> #define REPEAT_S(s, count, macro, data) \
> REPEAT_I( \
> CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
> count, macro, data \
> ) \
> /**/
> #define REPEAT_INDIRECT() REPEAT_I
> #define REPEAT_I(_, s, count, macro, data) \
> CHAOS_PP_WHEN _(count)( \
> CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
> CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
> CHAOS_PP_DEC(count), macro, data \
> )) \
> macro _(s, CHAOS_PP_DEC(count), data) \
> ) \
> /**/
>
> As far as I understand, REPEAT, REPEAT_S, REPEAT_INDIRECT, REPEAT_I,
> and the out-of-sight CHAOS_PP_STATE, CHAOS_PP_OBSTRUCT,
> CHAOS_PP_EXPR_S are dealing with the preprocessor alone and have zero
> relevance to the task.

First, REPEAT and REPEAT_S are interfaces, one being lower-level than the other.
REPEAT_INDIRECT and REPEAT_I are implementation macros of the REPEAT interface.
REPEAT_INDIRECT isn't even necessary, I used it to be clearer. CHAOS_PP_STATE,
CHAOS_PP_OBSTRUCT, and CHAOS_PP_EXPR_S are primitives used to implement
bootstrapped recursion.

> The others implement an idiom for looping that
> I'm sure one can learn, but is far from familiar to a C++ programmer.

Yes, it is far from familiar--which is both good and bad.

> To say that that's just a syntactic cleanliness thing is a bit of a
> stretch IMHO. By the same argument, any Turing complete language will
> do at the cost of "some" syntactic cleanliness.

For the most part, the difference is syntactic cleanliness. Without the
boilerplate required for recursion, the primary implementation macro becomes:

#define REPEAT_I(count, macro, data) \
CHAOS_PP_WHEN(count)( \
REPEAT_I(CHAOS_PP_DEC(count), macro, data) \
macro(CHAOS_PP_DEC(count), data) \
) \
/**/

Which is really not much different than any higher-order construct:

(defun repeat (count function data)
  (unless (zerop count)
    (repeat (1- count) function data)
    (funcall function (1- count) data)))

; e.g.
(repeat 10
        #'(lambda (count data)
            (unless (zerop count)
              (format t ", "))
            (format t "~a~a" data count))
        "class T")

>> But the preprocessor *is* adequate for the task. It just isn't as
>> syntactically
>> clean as you'd like it to be.
>
> I maintain my opinion that we're talking about more than syntactic
> cleanliness here. I didn't say the preprocessor is "incapable" for
> the task. But I do believe (and your code strengthened my belief)
> that it is "inadequate". Now I looked on www.m-w.com and I saw that
> inadequate means "
>> not adequate : INSUFFICIENT; also : not capable " and that adequate
>> means
> "sufficient for a specific requirement" and "lawfully and reasonably
> sufficient". I guess I meant it as a negation of the last meaning,
> and even that is a bit too strong. Obviously the preprocessor is
> "capable", because hey, there's the code, but it's not, let me
> rephrase - very "fit" for the task.

The preprocessor is not designed for the task. Obviously it isn't ideal. The
difference is not just syntactic. There are also idioms that go with it. Even
so, those idioms are straightforward when you're familiar with the language and
become a relatively small amount of boilerplate and syntactic clutter.

> Wouldn't it be nicer if you just had one mechanism (true recursion or
> iteration) that does it all in one shot?

Yes, it would, but it isn't at the top of my list because I can already simulate
it in a plethora of ways. There are other things that I cannot do that are more
important to me.

Regards,
Paul Mensonides

Andrei Alexandrescu (See Website for Email)

Aug 22, 2004, 7:05:40 PM
Paul Mensonides wrote:
> That said, along some lines I agree that having a different language is a
> disadvantage. Along others I don't. A different language exercises the brain
> and promotes new ways of doing things.

I'd be the first to agree with that. But isn't it preferable to have
a good new language to start with and think up from that, instead of
having a poor new language that asks you to do little miracles to get
the most basic things done?

>>To say that that's just a syntactic cleanliness thing is a bit of a
>>stretch IMHO. By the same argument, any Turing complete language will
>>do at the cost of "some" syntactic cleanliness.
>
> For the most part, the difference is syntactic cleanliness. Without the
> boilerplate required for recursion, the primary implementation macro becomes:
>
> #define REPEAT_I(count, macro, data) \
> CHAOS_PP_WHEN(count)( \
> REPEAT_I(CHAOS_PP_DEC(count), macro, data) \
> macro(CHAOS_PP_DEC(count), data) \
> ) \
> /**/

Well if one removes some boilerplate code, C can do virtuals and
templates. That doesn't prove a lot.

I saw only convoluted code in the URL that Dave forwarded. I asked for
a good example here on the Usenet. You gave me an example. I commented
on the example saying that it drowns in details. Now you're telling me
that "if you remove the details in which the example is drowning" it
doesn't drown in them anymore. Well ok :o).

> The preprocessor is not designed for the task. Obviously it isn't ideal. The
> difference is not just syntactic. There are also idioms that go with it. Even
> so, those idioms are straightforward when you're familiar with the language and
> become a relatively small amount of boilerplate and syntactic clutter.
>
>>Wouldn't it be nicer if you just had one mechanism (true recursion or
>>iteration) that does it all in one shot?
>
> Yes, it would, but it isn't at the top of my list because I can already simulate
> it in a plethora of ways. There are other things that I cannot do that are more
> important to me.

That's reasonable. I just tried to give that as an example. Given that
there *are* things that you cannot do, I was hoping to increase your
(and others') motivation to look into ideas for a new preprocessor.


Andrei

llewelly

Aug 23, 2004, 6:24:15 AM
jco...@taeus.com (Jerry Coffin) writes:

> Hyman Rosen <hyr...@mail.com> wrote in message news:<YEfVc.7145$si.4688@trndny06>...
>> Jerry Coffin wrote:

[snip]


> I'm not on the committee myself, but I've certainly conversed with a
> number of committee members, and ALL of them I've talked to have
> admitted that export has turned out to be substantially more difficult
> to implement than was expected.

And yet, they shot down Herb's proposal to remove it, 28 in favor of
keeping it, 8 against. (See
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1459.html)

Those in favor of keeping it included major implementors that don't
yet support it.

>
> So far, few people who've really used export seem to have found
> tremendous benefits to it either.
>
> By contrast, nobody seems to have been terribly surprised by the
> amount of work it has taken to implement exception handling, and most
> people seem to believe that its benefits are justify the work.
>
> The area where exception handling may have surprised some people
> and/or had unjustified costs is not in its implementation, but the
> burden for exception safety that's placed on the user. Though few
> people seem to be aware of it (yet), export has a similar effect -- it
> affects name lookup in ways most people don't seem to expect, and can
> substantially increase the burden on the user.

I think most of those effects come not from export but from two-phase
lookup.

> So, if you'd prefer, I'd rephrase the question: as I recall, it's
> claimed that EDG put something like three man-years of labor into
> implementing export, and that's not all it takes to implement it
> either. So far even if we take ALL the users into account, I've yet to
> see an indication that anybody has saved three man-years of labor
> because export was there.

[snip]

If export took 1095 days to implement, and there are 10000 users of
export, each of them need only save one hour, and export is a net
win for the community.

I don't know if there are 10000 users of export, but one thing people
seem to keep ignoring in all this cost/benefit discussion is the
sheer size of the C++ community; a feature can be extraordinarily
expensive for implementors, and yet be a net win even if it is only
a small savings for each individual user.

Paul Mensonides

Aug 23, 2004, 6:34:19 AM
"Andrei Alexandrescu (See Website for Email)"
<seewebsit...@moderncppdesign.com> wrote in message
news:4128D01F...@moderncppdesign.com...

> Paul Mensonides wrote:
> > That said, along some lines I agree that having a different language is a
> > disadvantage. Along others I don't. A different language exercises the
> > brain and promotes new ways of doing things.
>
> I'd be the first to agree with that. But isn't it preferable to have
> a good new language to start with and think up from that, instead of
> having a poor new language that asks you to do little miracles to get
> the most basic things done?

From an immediate productivity standpoint, yes. However, there are a lot of
ways that people think about code (in C++ for example) explicitly because of a
lack of direct language facility. Look at template metaprogramming for example
or SFINAE manipulation. Those things could be implemented better with direct
support from the language, but something beyond that or something completely
different won't be. Because of the kind of thinking that those existing
solutions engender, those other things may not be unreachable. Removing
boilerplate or indirect hacks by adding language features is a never-ending
story. Granted, something like recursion is very basic, but then, there are
things that can be done because of the lack of recursion (e.g. I use it to
compare identifiers among other things). As I'm sure that you understand, doing
much with little is also satisfying in its own right. Besides, the "little
miracles" required can be hidden behind a library interface, as Chaos does. I
told you that that was basically doing it by hand. Here's a more high-level
variant:

#define TYPELIST(...) \
CHAOS_PP_EXPR(CHAOS_PP_TUPLE_FOLD_RIGHT( \
TYPELIST_OP, (__VA_ARGS__), Loki::NilType \
)) \
/**/
#define TYPELIST_OP(s, type, ...) \
Loki::Typelist<CHAOS_PP_DECODE(type), __VA_ARGS__> \
/**/

Though there is some, there isn't a lot of boilerplate here. Note also that
this itself is a library interface, and the ultimate boilerplate clutter is all
but gone when users use the interface. The only relic that you have is that
you have to parenthesize types that contain open commas--which is insignificant.

TYPELIST(int, double, char)

It is also important to note the fundamental point: this implementation,
along with reusable library abstractions, regardless of boilerplate,
requires only about ten lines of code and completely replaces at least
fifty lines of really ugly repetition (i.e. the TYPELIST_x macros). Not
only does it replace it, but it expands the upper limit from fifty types to
about five thousand types--effectively removing any impact from the limit.

> Well if one removes some boilerplate code, C can do virtuals and
> templates. That doesn't prove a lot.

No, it doesn't, nor is it meant to. I'm merely pointing out that the
boilerplate is simple when you know what you are doing. It doesn't even get
close to "drowning in details".

> I saw only convoluted code in the URL that Dave forwarded.

Which URL was that? If Boost, then the code is drowning in workarounds. If
Chaos, then much of the code you saw is doing much more advanced things than
users need to ever see or do.

> I asked for
> a good example here on the Usenet. You gave me an example. I commented
> on the example saying that it drowns in details. Now you're telling me
> that "if you remove the details in which the example is drowning" it
> doesn't drown in them anymore. Well ok :o).

What I was saying is that the details that it "drowns in", as you put it, are
nothing but boilerplate. Boilerplate is basically syntactic clutter that you
relatively easily learn to ignore when reading and writing code.

> > The preprocessor is not designed for the task. Obviously it isn't ideal.
> > The difference is not just syntactic. There are also idioms that go with
> > it. Even so, those idioms are straightforward when you're familiar with
> > the language and become a relatively small amount of boilerplate and
> > syntactic clutter.
> >
> >>Wouldn't it be nicer if you just had one mechanism (true recursion or
> >>iteration) that does it all in one shot?
> >
> > Yes, it would, but it isn't at the top of my list because I can already
> > simulate it in a plethora of ways. There are other things that I cannot
> > do that are more important to me.
>
> That's reasonable. I just tried to give that as an example. Given that
> there *are* things that you cannot do, I was hoping to increase your
> (and others') motivation to look into ideas for a new preprocessor.

To be more accurate, there is no code that cannot be generated. It is only a
question of how clean it is and whether it is clean enough to be an acceptable
solution.

As far as a new preprocessor is concerned, I just don't think that it will
happen. I do like the idea of a new kind of macro that can be recursive. That
is the main limitation preventing your imaginary sample syntax. Lack of
backslashes or overloading is insignificant compared to that. Even so, the
ability to process tokens individually, regardless of the type of preprocessing
token, is far more important than any of those, because with that ability you
can make interpreters for any syntax that you like--including advanced
domain-specific languages. (For example, consider a parser generator that
operates directly on grammar productions rather than encoded grammar
productions.) Such an ability would promote C++ to a near
intentional-programming environment.

Regards,
Paul Mensonides

tom_usenet

Aug 23, 2004, 6:03:27 PM
On 21 Aug 2004 12:00:50 -0400, Rob Williscroft <r...@freenet.co.uk>
wrote:

>tom_usenet wrote in news:7p8ci0lh7ifdlk04l...@4ax.com in
>comp.lang.c++:
>
>>
>> The alternative is to simply not have any kind of separate compilation
>> of templates.
>
>> So we could drop two phase name lookup,
>
>2 phase lookup allows us to write determinate code (*). I really don't
>see what it has to do with separate compilation; if anything, it's a
>feature that makes export harder (the exported code wouldn't need
>to remember the declaration context).

>*) By this I mean code that does what the author of the code
> intended and is subject to the minimum possible reinterpretation
> at the point of instantiation.

It is necessary for export though.

int helper(); //internal to definition.cpp

export template<class T>
void f()
{
helper(); //want to lookup in definition context!
}

That doesn't work without two-phase name lookup. I don't think two
phase name lookup is vital otherwise, since if the inclusion model is
used, names used in the template definitions can be made visible
simply by putting the definitions before the point of instantiation
(as is usually the case anyway).

The determinate code is a secondary issue, and I think a rare problem;
impl namespaces are generally employed to make sure internal template
gubbins won't be replaced by stuff from the point of instantiation.

>> "template" and
>
>Which "template"? Do you mean the .template? If so, this is a
>parsing problem: the parser prefers < to mean "less-than" over
>"template-argument-list". It could be solved with some backtracking
>(I think :), but we would lose: "if ( 0 < i > 0 ) ... ".
>
>Or do you want code that is sometimes a member template or sometimes
>a non template member ?
>
>> "typename" disambiguators,
>
>Again AIUI typename is about writing determinate code.

It's also about allowing the compiler to pre-parse and syntax check
templates even if they aren't instantiated. I don't think it is a
common problem that someone passes something that evaluates to a
static member where a type was expected! template and typename are
annoying at best, and give people new to templates unnecessary (if
export isn't used) headaches.

>> and possibly some other nasty features,
>> basically moving to the basic template inclusion model implemented by
>> old Borland and Microsoft compilers (except without the bugs!)
>>
>> Hmm. It was simpler back then; I think two phase name lookup is still
>> extremely badly understood, as are the merits and otherwise of export.
>
>It was simpler, but also very confusing: the behaviour of your code
>depended on when you instantiated your templates. Everything worked until
>it stopped working, and then things just got strange (at least that's how
>I remember it :).

This is the case even with two phase lookup, although I agree that
there is slightly less freedom for the template to change meaning at
instantiation time. But I really don't think that this was a common
cause of problems, compared to the errors people get through two-phase
name lookup related issues, was it?

Tom

Gabriel Dos Reis

Aug 23, 2004, 6:24:56 PM
jco...@taeus.com (Jerry Coffin) writes:

[...]

| So, if you'd prefer, I'd rephrase the question: as I recall, it's
| claimed that EDG put something like three man-years of labor into
| implementing export, and that's not all it takes to implement it
| either.

But during the period EDG implemented export, they also implemented a
complete Java front-end. It is not as if, when they were implementing
export, that was the only thing they were doing.

--
Gabriel Dos Reis
g...@integrable-solutions.net


ka...@gabi-soft.fr

Aug 23, 2004, 6:27:01 PM
"Walter" <wal...@digitalmars.nospamm.com> wrote in message
news:<UuDVc.161558$8_6.45179@attbi_s04>...

> "tom_usenet" <tom_u...@hotmail.com> wrote in message
> news:7p8ci0lh7ifdlk04l...@4ax.com...
> > The alternative is to simply not have any kind of separate
> > compilation of templates. So we could drop two phase name lookup,
> > "template" and "typename" disambiguators, and possibly some other
> > nasty features, basically moving to the basic template inclusion
> > model implemented by old Borland and Microsoft compilers (except
> > without the bugs!)

> > Hmm. It was simpler back then; I think two phase name lookup is
> > still extremely badly understood, as are the merits and otherwise
> > of export. (How many people realise that point-of-instantiation
> > lookup of function names uses ADL only, and not ordinary lookup?)

> How it works in D is pretty simple. The template arguments are looked
> up in the context of the point of instantiation. The symbols inside
> the template body are looked up in the context of the point of
> definition. It's how you'd intuitively expect it to work, as it works
> analogously to ordinary function calls.

I'm not sure I understand this. Do you mean that in something like:

template< typename T >
void
f( T const& t )
{
g( t ) ;
}

g will be looked up in the context of the template definition, where the
actual type of its parameter is not known? And if so, what about:

template< typename Base >
class Derived : public Base
{
public:
void f()
{
this->g() ;
}
} ;

?

I'd say that templates must have the concept of dependent names, which
are looked up at the point of instantiation.

When people argue against two-phase lookup in C++, they are saying first
of all that all names should be considered dependent. And of course,
that dependent name lookup should work exactly like any other name
lookup, and not use special rules.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jerry Coffin

Aug 23, 2004, 6:39:38 PM
llewelly <llewe...@xmission.dot.com> wrote in message news:<86vffas...@Zorthluthik.local.bar>...

[ ... ]

> > I'm not on the committee myself, but I've certainly conversed with a
> > number of committee members, and ALL of them I've talked to have
> > admitted that export has turned out to be substantially more difficult
> > to implement than was expected.
>
> And yet, they shot down Herb's proposal to remove it, 28 in favor of
> keeping it, 8 against.

That's not too surprising, at least to me. First of all, removing a
keyword, feature, etc., from a language is a major step, and the
majority of the committee would have to be convinced that there was a
_major_ benefit from doing so before it would pass.

The reality is that most of them clearly consider the current
situation perfectly acceptable: the standard requires export, but
virtually everybody ignores the requirement.

> Those in favor of keeping it included major implementors that don't
> yet support it.

That's not a major surprise, at least to me. It appears to me that
compiler vendors mostly fall into two camps: those who have already
implemented export, and those who have no plan to do so.

Those who've already implemented export are obviously motivated to
keep it.

Those who haven't mostly don't seem to care and have no plans to
implement it anyway.

The only vendors to whom it would be a major issue would be those who
have not implemented it, but figure they'll have to do so if it
remains in the standard. The vote more or less confirms my opinion
that this group is quite small.

The cost isn't primarily to the vendors -- it's to the users. The big
problem is that export is just the beginning of the proverbial
slippery slope. If full compliance appears achievable, most vendors
will try to achieve it, even if it means implementing a few things
they don't really value.

If full compliance appears unachievable, or at least totally
unrealistic, then they're left to their own judgement about what
features to leave out. In this case, those last few features they'd
have implemented for full compliance are likely to be left out.

The result is that for most people, not only is export itself
unusable, but (if they care at all about portability) quite a few
other features are rendered unusable as well.

[ ... ]

> > The area where exception handling may have surprised some people
> > and/or had unjustified costs is not in its implementation, but the
> > burden for exception safety that's placed on the user. Though few
> > people seem to be aware of it (yet), export has a similar effect -- it
> > affects name lookup in ways most people don't seem to expect, and can
> > substantially increase the burden on the user.
>
> I think most of those effects come not from export but from two-phase
> lookup.

I'd agree, to some extent -- it just happens that in an inclusion
model, the effects of two-phase name lookup seem relatively natural,
but in an export model they start to seem quite unnatural.

[ ... ]

> If export took 1095 days to implement, and there are 10000 users of
> export, each of them need only save one hour, and export is a net
> win for the community.

First of all, I think this model of the cost is wrong to start with
(about which, see below). Second, even if the model was correct, the
numbers would still almost certainly be wrong: it assumes that the
people who write C++ compilers (specifically those who implement
export) are merely average C++ programmers.

I suspect only a small number of the very best programmers are capable
of implementing export at all. This means considerably _more_ time
needs to be saved than expended to reach the break-even point.



> I don't know if there are 10000 users of export, but one thing people
> seem to keep ignoring in all this cost/benefit discussion is the
> sheer size of the C++ community; a feature can be extraordinarily
> expensive for implementors, and yet be a net win even if it is only
> a small savings for each individual user.

Not really -- to the user, the real cost of export isn't directly
measured in the number of hours it took to implement. The real cost is
the other features that could have been implemented with the same
effort.

As such, for export to work out as a net benefit to the user, we have
to assume that the compiler is close enough to perfect otherwise that
implementing export is the single most efficient use of the
implementors' time.

I doubt that's the case right now, and I don't think I can predict
that it will ever be the case either.

--
Later,
Jerry.

The universe is a figment of its own imagination.


Walter

Aug 23, 2004, 6:43:24 PM

"llewelly" <llewe...@xmission.dot.com> wrote in message
news:86vffas...@Zorthluthik.local.bar...
> I don't know if there are 10000 users of export, but one thing people
> seem to keep ignoring in all this cost/benefit discussion is the
> sheer size of the C++ community; a feature can be extraordinarily
> expensive for implementors, and yet be a net win even if it is only
> a small savings for each individual user.

In determining the benefit to users, one must also consider the improvements
to the compiler that are *not done* because of the heavy diversion of
resources to implementing export. Two to three man-years of implementation effort
should realistically result in a killer feature to be worth it.

Is export really the only improvement you want out of your existing C++
compiler?

Andrei Alexandrescu (See Website for Email)

Aug 23, 2004, 7:02:33 PM
"Paul Mensonides" <leav...@comcast.net> wrote in message
news:FeKdnWxxMc6...@comcast.com...
[snip closing points that I agree with]

In an attempt to lure you into a discussion on would-be nice
features, here's a snippet from an email exchanged with a friend. It has to
do with the preprocessor distinguishing among tokens and more complex
expressions:

In my vision, a good macro system understands not only tokens, but also
other grammar nonterminals (identifiers, expressions, if-statements,
function-call-expressions, ...)

For example, let's think of writing a nice "min" macro. It should avoid
double evaluation by distinguishing atomic tokens (identifiers or integral
constants) from other expressions:

$define min(token a, token b) {
(a < b ? a : b)
}

$define min(a, b) {
min_fun(a, b)
}

Next we think, how about accepting any number of arguments. So we write:

$define min(token a, b $rest c) {
min(b, min(a, c))
}

$define min(a, b $rest c) {
min(a, min(b, c))
}

The code tries to find the most clever combination of inline operators ?:
and function calls to accommodate any number of arguments. In doing so, it
occasionally moves an identifier around in hope to catch as many identifier
pairs as possible. That's an example of a nice syntactic transformation.


Andrei

Hyman Rosen

Aug 24, 2004, 2:35:50 AM
Walter wrote:
> In determining the benefit to users, one must also consider the improvements
> to the compiler that are *not done* because of the heavy diversion of
> resources to implementing export.

And pray tell what massive improvements did we see from the vendors who
didn't implement export?

Gabriel Dos Reis

Aug 24, 2004, 2:36:26 AM
"Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> writes:

[...]

| For example, let's think of writing a nice "min" macro. It should avoid
| double evaluation by distinguishing atomic tokens (identifiers or integral
| constants) from other expressions:
|
| $define min(token a, token b) {
| (a < b ? a : b)
| }

By the end of the day, you may rediscover C++ templates... ;-p

--
Gabriel Dos Reis
g...@integrable-solutions.net


Alf P. Steinbach

Aug 24, 2004, 2:37:53 AM
* Gabriel Dos Reis:

In this case the "no actual experience" is very relevant; it says that no
compilers available to Andrei A. for project work implement the feature.

So for all practical purposes the feature must be useless to him.

It certainly is 100% useless to me and anybody I know: we don't have it,
and note that this argument is from lack of experience... ;-)

It's 2004.

The Standard was published in 1998.


--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Rob Williscroft

Aug 24, 2004, 2:41:16 AM
tom_usenet wrote in news:vkbji012l63qboq5d...@4ax.com in
comp.lang.c++.moderated:

> On 21 Aug 2004 12:00:50 -0400, Rob Williscroft <r...@freenet.co.uk>
> wrote:
>
>>tom_usenet wrote in news:7p8ci0lh7ifdlk04l...@4ax.com in
>>comp.lang.c++:
>>
>>>
>>> The alternative is to simply not have any kind of separate
>>> compilation of templates.
>>
>>> So we could drop two phase name lookup,
>>
>>2 phase lookup allows us to write determinate code (*). I really
>>don't see what it has to do with separate compilation; if anything,
>>it's a feature that makes export harder (the exported code wouldn't
>>need to remember the declaration context).
>
>>*) By this I mean code that does what the author of the code
>> intended and is subject to the minimum possible reinterpretation
>> at the point of instantiation.
>
> It is necessary for export though.
>
> int helper(); //internal to definition.cpp
>
> export template<class T>
> void f()
> {
> helper(); //want to lookup in definition context!
> }
>
> That doesn't work without two-phase name lookup.

Two-phase lookup does far more than is necessary though.
All that needs to be done is that the declaration of helper is
remembered during instantiation; there is no need to exclude
overloads (*) that come from the instantiator's (**) context.

*) Doesn't apply with the given example, but might for helper( T )
for example.

**) My apologies for making up words, but it's hard to remember all the
Standard terms for all this.

> I don't think two
> phase name lookup is vital otherwise, since if the inclusion model is
> used, names used in the template definitions can be made visible
> simply by putting the definitions before the point of instantiation
> (as is usually the case anyway).
>
> The determinate code is a secondary issue, and I think a rare problem;

We have *very* different experiences :)

> impl namespaces are generally employed to make sure internal template
> gubbins won't be replaced by stuff from the point of instantiation.
>
>>> "template" and
>>
>>Which "template"? Do you mean the .template? If so, this is a
>>parsing problem: the parser prefers < to mean "less-than" over
>>"template-argument-list". It could be solved with some backtracking
>>(I think :), but we would lose: "if ( 0 < i > 0 ) ... ".
>>
>>Or do you want code that is sometimes a member template or sometimes
>>a non template member ?
>>
>>> "typename" disambiguators,
>>
>>Again AIUI typename is about writing determinate code.
>
> It's also about allowing the compiler to pre-parse and syntax check
> templates even if they aren't instantiated.

Which is about writing determinate code, is it not?

> I don't think it is a
> common problem that someone passes something that evaluates to a
> static member where a type was expected! template and typename are
> annoying at best, and give people new to templates unnecessary (if
> export isn't used) headaches.

I agree they are annoying, particularly ".template"; there has to be
a better solution:

(* off the top of my head *)

template < typename T >
struct some_class_template
{
template:

/* type must be a typename in *all* specialization's
*/
typename type;

/* get must be a template member-function */
template <> get();

public:

/* specialization things */
};

But we've got what we've got, and I for one think it's better
than the pre-standard: "I've no idea what this code does, let's
instantiate it and see".

>
>>> and possibly some other nasty features,
>>> basically moving to the basic template inclusion model implemented
>>> by old Borland and Microsoft compilers (except without the bugs!)
>>>
>>> Hmm. It was simpler back then; I think two phase name lookup is
>>> still extremely badly understood, as are the merits and otherwise of
>>> export.
>>
>>It was simpler, but also very confusing: the behaviour of your code
>>depended on when you instantiated your templates. Everything worked
>>until it stopped working, and then things just got strange (at least
>>that's how I remember it :).
>
> This is the case even with two phase lookup, although I agree that
> there is slightly less freedom for the template to change meaning at
> instantiation time.

It's a catch-22: templates are supposed to change meaning, but in an
orderly fashion; the typename and .template cruft aside, I think the
committee successfully found the middle ground.

> But I really don't think that this was a common
> cause of problems, compared to the errors people get through two-phase
> name lookup related issues, was it?

It was for me, but it's entirely possible my experiences weren't typical.

Rob.
--
http://www.victim-prime.dsl.pipex.com/

Walter

Aug 24, 2004, 2:41:38 AM

"Gabriel Dos Reis" <g...@integrable-solutions.net> wrote in message
news:m3u0uuf...@uniton.integrable-solutions.net...

> jco...@taeus.com (Jerry Coffin) writes:
> | So, if you'd prefer, I'd rephrase the question: as I recall, it's
> | claimed that EDG put something like three man-years of labor into
> | implementing export, and that's not all it takes to implement it
> | either.
>
> But duing the period EDG implemented export, they also implemented a
> complete Java front-end. It is not like when they were implementing
> export, that was the only thing they were doing.


Daveed Vandevoorde wrote 2002-06-08 in this group:
"As a point of data, it took EDG nearly two years to implement export.
During those two years, most of our time was spent on export proper; the
remaining time was spent on other stuff (e.g., GNU C compatibility, bug
fixes, etc.). Had we not implemented export, we would have been able to do
many other interesting things and our response to improvement requests
would have been (even :-) better."

and:

on 2004-02-05 in microsoft.public.vc.stl:
"There are three implementors or export (the EDG employees):
I am one of them. I, myself, _am_ claiming that it can be used
that way."

so 3 man-years looks right. Certainly, other vendors can learn from EDG, and
Daveed has shown a willingness to help, and this may reduce the time
necessary. So 2 man-years for someone already competent with the internals
of a C++ compiler is not an unreasonable figure.

Andrei Alexandrescu (See Website for Email)

Aug 24, 2004, 2:42:29 AM
"Andrei Alexandrescu (See Website for Email)"
<SeeWebsit...@moderncppdesign.com> wrote in message
news:2ov62mF...@uni-berlin.de...

> Next we think, how about accepting any number of arguments. So we write:
>
> $define min(token a, b $rest c) {
> min(b, min(a, c))
> }
>
> $define min(a, b $rest c) {
> min(a, min(b, c))
> }

Sorry, I saw this problem when proofreading my post---I always proofread
after I post :oD.

The above could recurse forever when a and b are both tokens. I meant:

$define min(token a, b $rest c) {
min(b, min(a, c))
}

$define min(a, token b $rest c) {
min(a, min(b, c))
}

$define min(a, b $rest c) {
min(a, min(b, c))
}

Bo Persson

Aug 24, 2004, 3:50:25 PM

"Hyman Rosen" <hyr...@mail.com> skrev i meddelandet
news:U2wWc.6759$Ff2.2185@trndny06...

> Walter wrote:
> > In determining the benefit to users, one must also consider the
> > improvements to the compiler that are *not done* because of the heavy
> > diversion of resources to implementing export.
>
> And pray tell what massive improvements did we see from the vendors
> who didn't implement export?

Managed Extensions, and C++/CLR. Plus the D language. :-)

The resources were obviously available, but somehow the priorities were
not what one could expect.


Bo Persson

Walter

Aug 24, 2004, 3:51:20 PM

"Hyman Rosen" <hyr...@mail.com> wrote in message
news:U2wWc.6759$Ff2.2185@trndny06...

> Walter wrote:
> > In determining the benefit to users, one must also consider the
> > improvements to the compiler that are *not done* because of the heavy
> > diversion of resources to implementing export.
>
> And pray tell what massive improvements did we see from the vendors who
> didn't implement export?

Microsoft did C++/CLI, for example. Your argument makes sense if C++
compilers are otherwise perfect, merely lacking export, and the
implementation engineers would otherwise just go on vacation for 2 years
<g>. I know that the list of improvements people want in Digital Mars C++
gets added to nearly daily. People want faster compiles, faster generated
code, better debugging, more targets, more platforms, more libraries, etc.

Walter

Aug 24, 2004, 3:55:29 PM

<ka...@gabi-soft.fr> wrote in message
news:d6652001.04082...@posting.google.com...

> "Walter" <wal...@digitalmars.nospamm.com> wrote in message
> news:<UuDVc.161558$8_6.45179@attbi_s04>...
> > How it works in D is pretty simple. The template arguments are looked
> > up in the context of the point of instantiation. The symbols inside
> > the template body are looked up in the context of the point of
> > definition. It's how you'd intuitively expect it to work, as it works
> > analogously to ordinary function calls.
>
> I'm not sure I understand this. Do you mean that in something like:
>
> template< typename T >
> void
> f( T const& t )
> {
> g( t ) ;
> }
>
> g will be looked up in the context of the template definition, where the
> actual type of its parameter is not known? And if so, what about:
>
> template< typename Base >
> class Derived : public Base
> {
> public:
> void f()
> {
> this->g() ;
> }
> } ;
>
> ?
>
> I'd say that templates must have the concept of dependent names, which
> are looked up at the point of instantiation.

Think of it this way. In your first example, the argument for T (let's call
it arg) is looked up in the scope of the instantiation. Then, the scope of
the definition of f is loaded. The equivalent of:

typedef arg T;

is executed, which declares T to be of type arg, and the T is installed in
the scope of instantiation. Then, the semantic analysis of f happens in the
scope of the instantiation. T is found using normal lookup rules. It works
just like a function call does, except that the "parameter passing" into the
definition scope happens at compile time rather than run time. The analogous
happens with Base in the second example.

> When people argue against two-phase lookup in C++, they are saying first
> of all that all names should be considered dependent. And of course,
> that dependent name lookup should work exactly like any other name
> lookup, and not use special rules.

C++ tries to emulate this behavior with the two-phase lookup rules. But it's
hashed up - did you know you cannot declare a local with the same name as a
template parameter? It follows weird rules like that which are all its own.

tom_usenet

Aug 24, 2004, 6:29:51 PM
On 24 Aug 2004 02:41:16 -0400, Rob Williscroft <r...@freenet.co.uk>
wrote:

> > It is necessary for export though.


> >
> > int helper(); //internal to definition.cpp
> >
> > export template<class T>
> > void f()
> > {
> >     helper(); //want to look up in the definition context!
> > }
> >
> > That doesn't work without two-phase name lookup.
>
>Two-phase lookup does far more than is necessary though.
>All that needs to be done is that the declaration of helper is
>remembered during instantiation; there is no need to exclude
>overloads (*) that come from the instantiator's (**) context.

You are basically saying that name lookup could occur uniformly in
both the definition and instantiation contexts (with the contexts
merged), rather than with different rules in the two contexts as is
the case with current two phase name lookup rules. This lookup would
presumably all occur only when the template was instantiated (hence
"typename" would not necessarily be needed). This would be possible,
but unnamed namespaces and static functions and variables add
complications, e.g.:

namespace
{
    int helper(); //internal to definition.cpp
}

export template<class T>
void f()
{
    helper(); //want to look up in the definition context!
}

The usual fact that there are no name clashes in the anonymous
namespace would no longer hold, since the instantiation context might
also have a helper function. Making the call ambiguous would be
unhelpful, completely defeating the point of anonymous namespaces in
the first place.

One way around the problem would be to ignore names from anonymous
namespaces in the instantiation context, but what if the types used to
instantiate the template are from anonymous namespaces (which is
perfectly legal, since such names have external linkage)?

Two phase name lookup solves these problems quite well, but these
problems don't occur at all in the inclusion model.

>*) Doesn't apply with the given example, but might for helper( T )
> for example.

helper(T) wouldn't be excluded under the current rules.

> > It's also about allowing the compiler to pre-parse and syntax check
> > templates even if they aren't instantiated.
>
>Which is about writing determinate code, is it not?

I think it's mostly about catching errors (usually just typos)
earlier, so it isn't a user of a template who finds the problem, but
the developer.

Tom

Michiel Salters

Aug 24, 2004, 6:34:18 PM
jco...@taeus.com (Jerry Coffin) wrote in message news:<b2e4b04.04082...@posting.google.com>...

> llewelly <llewe...@xmission.dot.com> wrote in message news:<86vffas...@Zorthluthik.local.bar>...
>
> [ ... ]
>
> > > I'm not on the committee myself, but I've certainly conversed with a
> > > number of committee members, and ALL of them I've talked to have
> > > admitted that export has turned out to be substantially more difficult
> > > to implement than was expected.
> >
> > And yet, they shot down Herb's proposal to remove it, 28 in favor of
> > keeping it, 8 against.
>
> That's not too surprising, at least to me. First of all, removing a
> keyword, feature, etc., from a language is a major step, and the
> majority of the committee would have to be convinced that there was a
> _major_ benefit from doing so before it would pass.

True. The benefit would accrue only to vendors who hadn't implemented
it: they could then also claim compliance.

> The reality is that most of them clearly consider the current
> situation perfectly acceptable: the standard requires export, but
> virtually everybody ignores the requirement.

True, in an implementation sense. Marketing & Sales can't ignore it;
they must sell the compiler as "ISO minus export".

> > Those in favor of keeping it included major implementors that don't
> > yet support it.
>
> That's not a major surprise, at least to me. It appears to me that
> compiler vendors mostly fall into two camps: those who have already
> implemented export, and those who have no plan to do so.

Actually, I remember a finer distinction. Some vendors shipped their
compilers with older EDG frontends, but were considering newer
versions. Others (I can't remember for sure) may have shipped an
export-capable version, but with export disabled, perhaps because they
were still evaluating it. That was a camp which was certainly
considering export.

> Those who've already implemented export are obviously motivated to
> keep it.
>
> Those who haven't mostly don't seem to care and have no plans to
> implement it anyway.
>
> The only vendors to whom it would be a major issue would be those who
> have not implemented it, but figure they'll have to do so if it
> remains in the standard. The vote more or less confirms my opinion
> that this group is quite small.

Market pressure doesn't happen overnight.

> The cost isn't primarily to the vendors -- it's to the users. The big
> problem is that export is just the beginning of the proverbial
> slippery slope.

Which slope? A slope to adding new features? I don't think so; just
have a look at the various numerical proposals. For the users? I don't
think so; most of them will deal with export only when they use a
library. Library writers will have to deal with export, but that's OK:
they reap the benefits, and on average they're more competent in
dealing with the problems of export (look at the Boost library
writers).

> If full compliance appears unachievable, or at least totally
> unrealistic, then they're left to their own judgement about what
> features to leave out. In this case, those last few features they'd
> have implemented for full compliance are likely to be left out.
>
> The result is that for most people, not only is export itself
> unusable, but (if they care at all about portability) quite a few
> other features are rendered unusable as well.

Actually, there are saner ways to decide what to include in version X+1.
Goals could be:
* compile MC++ without workarounds
* compile current boost without workarounds
* compile all standard-compliant code without "export"

> > > The area where exception handling may have surprised some people
> > > and/or had unjustified costs is not in its implementation, but the
> > > burden for exception safety that's placed on the user. Though few
> > > people seem to be aware of it (yet), export has a similar effect -- it
> > > affects name lookup in ways most people don't seem to expect, and can
> > > substantially increase the burden on the user.
> >
> > I think most of those effects come not from export but from two-phase
> > lookup.
>
> I'd agree, to some extent -- it just happens that in an inclusion
> model, the effects of two-phase name lookup seem relatively natural,
> but in an export model they start to seem quite unnatural.

I've found that I could adjust to the effects. Basically, when you
step back, it's program hygiene: don't hand out pointers to private
members, don't hand out functions using implementation types.

Michiel Salters

Andrei Alexandrescu (See Website for Email)

Aug 24, 2004, 6:39:13 PM
"Gabriel Dos Reis" <g...@integrable-solutions.net> wrote in message
news:m3y8k5x...@uniton.integrable-solutions.net...

> "Andrei Alexandrescu \(See Website for Email\)"
> <SeeWebsit...@moderncppdesign.com> writes:
>
> [...]
>
> | For example, let's think of writing a nice "min" macro. It should avoid
> | double evaluation by distinguishing atomic tokens (identifiers or
> integral
> | constants) from other expressions:
> |
> | $define min(token a, token b) {
> | (a < b ? a : b)
> | }
>
> By the end of the day, you may rediscover C++ templates... ;-p

Not if this time you read my entire post :oD. (And the correction, too.)

The post was meant as an illustration of a syntactic transformation that
reorders an arbitrary-length list of arguments, be they identifiers,
numbers, or more complex expressions, into a combination of function calls
and inline operators. An optimizer could have done that, but it's a very
particular optimization that depends on the semantics of min (ordering - if
min(a, b) == a and min(b, c) == b, then min(a, c) == a).

Andrei

ka...@gabi-soft.fr

Aug 24, 2004, 6:41:58 PM
Hyman Rosen <hyr...@mail.com> wrote in message
news:<U2wWc.6759$Ff2.2185@trndny06>...
> Walter wrote:

> > In determining the benefit to users, one must also consider the
> > improvements to the compiler that are *not done* because of the
> > heavy diversion of resources to implementing export.

> And pray tell what massive improvements did we see from the vendors
> who didn't implement export?

It's also interesting to note that the one vendor who does implement
export is also the only (or at least was the first) one to implement all
of the new name lookup rules, and a number of other features.

FWIW: the one vendor to implement export also has some of the best
support for legacy code, with options to support several different
models of template instantiation.

To add to it, it would be difficult to find a firm which contributed
more to the standards effort -- expressed as a percentage of total
resources, I suspect that it would be impossible.

On the other hand, I've never seen any advertising from that vendor;
that seems to be one thing they economize on.

Maybe if some other vendors would invest more in the technology, and
less in advertising...

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Daniel R. James

Aug 24, 2004, 6:42:47 PM
"Paul Mensonides" <leav...@comcast.net> wrote in message news:<FeKdnWxxMc6...@comcast.com>...
> Even so, the
> ability to process tokens individually, regardless of the type of preprocessing
> token, is far more important than any of those, because with that ability you
> can make interpreters for any syntax that you like--including advanced
> domain-specific languages. (For example, consider a parser generator that
> operates directly on grammar productions rather than encoded grammar
> productions.) Such an ability would promote C++ to a near intentional
> environment.

One of the things I've been doing with the preprocessor is
implementing simple domain-specific languages. For Paul this example
will be very basic, but it might interest Andrei.

I've implemented an alternative version of scope guard, using the
preprocessor to implement 'named parameters', which lets you write
something like:

void User::AddFriend(User& newFriend)
{
    friends_.push_back(&newFriend);
    SCOPEGUARD(guard,
        using(UserCont, friends_)
        on_failure(friends_.pop_back())
    );
    pDB_->AddFriend(GetName(), newFriend.GetName());
    guard.dismiss();
}

That's an example from Andrei's original article, rewritten. Another
example:

FILE* fp = fopen("file.txt", "r");
SCOPEGUARD(guard,
    using(FILE*, fp)
    finally(fclose(fp))
);

You can have 'finally', 'on_failure' and 'on_success' actions, and
multiple 'using' statements. There's also some very basic syntax
checking, so that error messages aren't too bad. But they can
certainly be improved.

The code is available at:

http://www.calamity.org.uk/code/scope_guard.tar.gz

It requires boost and works on gcc, borland and visual c++ 6.5.

The implementation is very messy, but I think a lot of it can be
cleaned up and some of it factored out into libraries.

With a C99 preprocessor, and typeof/decltype it could be greatly
improved. The implementation would be a bit simpler and you wouldn't
have to specify the types in the using statements. It's not really
tested, so there are probably lots of bugs. I haven't done anything to
avoid name collisions either.

As far as domain specific languages go, the techniques used here to
implement the named parameters can be extended for more complex
languages, although the lack of recursion makes things tricky. It can
be used to parse any alphanumeric tokens; they don't have to be
function-style. I believe that Paul Mensonides has implemented
similar but much more sophisticated mini-languages as part of Chaos.

Daniel

Daniel R. James

Aug 24, 2004, 6:43:33 PM
Hyman Rosen <hyr...@mail.com> wrote in message news:<U2wWc.6759$Ff2.2185@trndny06>...

> And pray tell what massive improvements did we see from the vendors who
> didn't implement export?

C++/CLI? ;)

Gabriel Dos Reis

Aug 25, 2004, 7:36:00 AM
"Andrei Alexandrescu \(See Website for Email\)" <SeeWebsit...@moderncppdesign.com> writes:

| "Gabriel Dos Reis" <g...@integrable-solutions.net> wrote in message
| news:m3y8k5x...@uniton.integrable-solutions.net...
| > "Andrei Alexandrescu \(See Website for Email\)"
| > <SeeWebsit...@moderncppdesign.com> writes:
| >
| > [...]
| >
| > | For example, let's think of writing a nice "min" macro. It should avoid
| > | double evaluation by distinguishing atomic tokens (identifiers or
| > integral
| > | constants) from other expressions:
| > |
| > | $define min(token a, token b) {
| > | (a < b ? a : b)
| > | }
| >
| > By the end of the day, you may rediscover C++ templates... ;-p
|
| Not if this time you read my entire post :oD. (And the correction, too.)

I read your entire message. I did not find it appropriate to quote
about forty lines just to add one, which is why I kept the quote to a
minimum -- not an indication that I did not read past those lines.
As for the correction, it showed up only after another round -- when I
had already replied. Next time, make sure that you post the right
thing ;-/

| The post was meant as an illustration of a syntactic transformation that
| reorders an arbitrary-length list of arguments, be they identifiers,
| numbers, or more complex expressions, into a combination of function calls
| and inline operators. An optimizer could have done that, but it's a very
| particular optimization that depends on the semantics of min (ordering - if
| min(a, b) == a and min(b, c) == b, then min(a, c) == a).

The key point of my message was that I would not be surprised if, by
the time you think you have designed what you're looking for, you had
simply rediscovered C++ templates, with a checking system or concepts
that just include linguistic distinctions like Token and such.
I conjecture that only the syntax will differ (i.e. your "$").
But hey, that would not be the first time -- I just saw another
presentation last week on the very same topic (but with C as the base
language, not C++).

--
Gabriel Dos Reis
g...@integrable-solutions.net


Walter

Aug 25, 2004, 7:36:58 AM

<ka...@gabi-soft.fr> wrote in message
news:d6652001.04082...@posting.google.com...
> On the other hand, I've never seen any advertising from that vendor;
> that seems to be one thing they economize on.
>
> Maybe if some other vendors would invest more in the technology, and
> less in advertising...

How much would you estimate Digital Mars spends on advertising? <g>

Hyman Rosen

Aug 25, 2004, 7:39:54 AM
Walter wrote:
> Microsoft did C++/CLI, for example.

So instead of making their C++ standard-compliant, they used the
time saved by not implementing export to create a different language.
OK, I'm puzzled.

> I know that the list of improvements people want in Digital Mars C++
> gets added to nearly daily.

Yes. Does the list of improvements made get added to at the same rate
as well?

llewelly

Aug 25, 2004, 7:44:20 AM
jco...@taeus.com (Jerry Coffin) writes:
[snip]

> The cost isn't primarily to the vendors -- it's to the users. The big
> problem is that export is just the beginning of the proverbial
> slippery slope. If full compliance appears achievable, most vendors
> will try to achieve it, even if it means implementing a few things
> they don't really value.
[snip]

However - it seems several of those who've announced no intent to
implement export are nonetheless implementing everything else. So
I don't think we are sliding down that slippery slope.

>
> If full compliance appears unachievable, or at least totally
> unrealistic, then they're left to their own judgement about what
> features to leave out. In this case, those last few features they'd
> have implemented for full compliance are likely to be left out.
>
> The result is that for most people, not only is export itself
> unusable, but (if they care at all about portability) quite a few
> other features are rendered unusable as well.

Actually, I expect within a few more years, just about everything
except export will be portable.

>
> [ ... ]
>
>> > The area where exception handling may have surprised some people
>> > and/or had unjustified costs is not in its implementation, but the
>> > burden for exception safety that's placed on the user. Though few
>> > people seem to be aware of it (yet), export has a similar effect -- it
>> > affects name lookup in ways most people don't seem to expect, and can
>> > substantially increase the burden on the user.
>>
>> I think most of those effects come not from export but from two-phase
>> lookup.
>
> I'd agree, to some extent -- it just happens that in an inclusion
> model, the effects of two-phase name lookup seem relatively natural,
> but in an export model they start to seem quite unnatural.

[snip]

I don't know about that. I haven't used it, but I suspect it will
boil down to be a lot like the overloading rules - strange and
complex when examined in detail, with some infamous surprises -
but in practice, doing what most people expect most of the time.

llewelly

Aug 25, 2004, 7:44:48 AM
dan...@calamity.org.uk (Daniel R. James) writes:

> Hyman Rosen <hyr...@mail.com> wrote in message news:<U2wWc.6759$Ff2.2185@trndny06>...
>> And pray tell what massive improvements did we see from the vendors who
>> didn't implement export?
>
> C++/CLI? ;)

Like export, whether or not it is an improvement is in dispute, and
it's no use to those who desire portability. Or haven't you been
paying attention here lately?

llewelly

Aug 25, 2004, 7:45:31 AM
"Bo Persson" <b...@gmb.dk> writes:

> "Hyman Rosen" <hyr...@mail.com> wrote in message
> news:U2wWc.6759$Ff2.2185@trndny06...
>> Walter wrote:
>> > In determining the benefit to users, one must also consider the
>> > improvements to the compiler that are *not done* because of the
>> > heavy diversion of resources to implementing export.
>>
>> And pray tell what massive improvements did we see from the vendors
>> who didn't implement export?
>
> Managed Extensions, and C++/CLR. Plus the D language. :-)

[snip]

Much like export, these are of questionable benefit to those who need
portable C++.

llewelly

Aug 25, 2004, 7:46:00 AM
"Walter" <wal...@digitalmars.nospamm.com> writes:
[snip]

> Is export really the only improvement you want out of your existing C++
> compiler?

No. But the relativist analysis has been beat to death in the past,
so I thought I'd experiment with an absolutist angle.

ka...@gabi-soft.fr

Aug 25, 2004, 4:49:20 PM
"Walter" <wal...@digitalmars.nospamm.com> wrote in message
news:<xlBWc.49332$Fg5.1311@attbi_s53>...

> > ?

> typedef arg T;

So in sum, symbols inside the template body are looked up in the context
of the point of instantiation, and not in the context of the point of
definition. To make it clearer:

extern void g( int ) ;

template< typename T >
void
f( T const& t )
{
    g( t ) ;   // #1
    g( 1.5 ) ; // #2
}

User code, in another module... (I'm supposing we've done whatever is
necessary to use the f above. In C++, we've included the header where
f is defined. In D, I don't know what is necessary.)

void g( double ) {}

void
h()
{
    f( 2.5 ) ; // #3
}

In the instantiation of f triggered by line #3, which g is called in
line #1? In line #2? If symbols are looked up in the context of the
definition, both lines should call g(int), because that is the only
symbol g visible at the point of definition. If symbols are looked up
at the point of instantiation (as is the case in traditional C++, e.g.
CFront, early Borland, etc.), then both lines call g(double). The two
phase look-up in standard C++ makes line #2 a non-dependent call, so
it always calls g(int); line #1 is a dependent call, which sees g(int)
from the definition context plus whatever argument-dependent lookup
finds at the instantiation point -- and since double is a fundamental
type with no associated namespaces, that is nothing, so line #1 also
calls g(int). (With a class type argument, a g from the argument's
namespace would be found instead.)

> > When people argue against two-phase lookup in C++, they are saying
> > first of all that all names should be considered dependent. And of
> > course, that dependent name lookup should work exactly like any
> > other name lookup, and not use special rules.

> C++ tries to emulate this behavior with the two-phase lookup
> rules. But it's hashed up - did you know you cannot declare a local
> variable with the same name as a template parameter? It follows weird
> rules like that, rules that are all its own.

There are, IMHO, two problems with two-phase look-up as it is
implemented in C++. The first is that which names are dependent, and
which aren't, is determined by fairly subtle rules -- even when quickly
scanning f, above, you might miss the fact that the two g's are looked
up in entirely different contexts, and this is an extremely simple
case. (I'm not sure but what I wouldn't have preferred an explicit
declaration -- a name is only dependent if the template author
explicitly says it is.) The second is the fact that the name lookup is
not only in a different context, but follows subtly different rules.

I don't have enough experience with compilers implementing two phase
lookup correctly to say whether this will be a real problem in actual
practice. It may turn out to be like function overload resolution: the
rules are so complicated that almost no practitioner begins to really
understand them, but in real code, provided a few common-sense rules
are followed, the results are what one would intuitively expect, and
there is no problem. I hope this is the case, but I'm not counting on
it. (Part of the problem, of course, is that many practitioners already
have concrete experience with a different set of rules, which works
differently. Just being aware that two phase lookup exists, and knowing
a few simple rules to force dependency when you want it, goes a long
way.)

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

