Move template class implementation into CPP file


pavel....@gmail.com

Apr 2, 2006, 2:06:02 PM
Assume I have the code

File: MyClass.h:

template <typename C>
class MyClass
{
C m;
public:
void f();
};

How can I move the implementation of f() into a separate CPP file? I would
like to write something like this:

File MyClass.cpp:

#include "MyClass.h"

void template <typename C>MyClass::f()
{
// actual implementation of f();
}
-----------------

My VC 2005 doesn't allow me to write as above.


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Thomas Tutone

Apr 3, 2006, 3:16:02 AM
pavel....@gmail.com wrote:

> Assume I have the code
>
> File: MyClass.h:
>
> template <typename C>
> class MyClass
> {
> C m;
> public:
> void f();
> };
>
> How can I move implementation of f() into separate CPP? I would like to
> write something like that:
>
> File MyClass.cpp:
>
> #include "MyClass.h"
>
> void template <typename C>MyClass::f()

I think you mean:

template <typename C> void MyClass<C>::f()

> {
> // actual implementation of f();
> }

Of course, you'll either have to manually instantiate MyClass<C> for
every C you use outside of MyClass.cpp, or you'll get link errors. See
the FAQ:

http://www.parashift.com/c++-faq-lite/templates.html#faq-35.12
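
For example, a minimal sketch along the lines the FAQ describes (int and
double are just placeholder types; list whatever your program actually uses):

// MyClass.cpp
#include "MyClass.h"

template <typename C>
void MyClass<C>::f()
{
    // actual implementation of f()
}

// One explicit instantiation per type used in other translation units.
template class MyClass<int>;
template class MyClass<double>;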

Best regards,

Tom

Gene Bushuyev

Apr 3, 2006, 3:21:57 AM
<pavel....@gmail.com> wrote in message
news:1143978704....@i39g2000cwa.googlegroups.com...

> Assume I have the code
>
> File: MyClass.h:
>
> template <typename C>
> class MyClass
> {
> C m;
> public:
> void f();
> };
>
> How can I move implementation of f() into separate CPP? I would like to
> write something like that:
>
> File MyClass.cpp:
>
> #include "MyClass.h"
>
> void template <typename C>MyClass::f()
> {
> // actual implementation of f();
> }
> -----------------
>
> My VC 2005 doesn't allow me to write as above.


Because the syntax is incorrect. It should have been:

template<typename C>
void MyClass<C>::f()

But before you do that you should be aware that the compiler will instantiate
only those templates that are used in that cpp file or explicitly instantiated
there. That's why in most cases you need to keep the template implementation
in the header file, so that it is available to the compiler when an
instantiation is required and so that the linker can find that instantiation.
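
For example, here is the usual compromise, sketched with a made-up
"MyClass.inl" file name: the definitions live in their own file, but the
header still pulls them into every translation unit:

// MyClass.h
#ifndef MYCLASS_H
#define MYCLASS_H

template <typename C>
class MyClass
{
    C m;
public:
    void f();
};

#include "MyClass.inl"  // definitions, see below

#endif

// MyClass.inl
template <typename C>
void MyClass<C>::f()
{
    // actual implementation of f()
}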

P.S. I don't think you are interested in discussing exported templates here as
they are mostly left unimplemented and even when implemented not that useful
anyway.

--
Gene Bushuyev (www.gbresearch.com)
----------------------------------------------------------------
There is no greatness where there is no simplicity, goodness and truth. ~ Leo
Tolstoy

benben

Apr 3, 2006, 3:30:11 AM

> Assume I have the code
>
> File: MyClass.h:
>
> template <typename C>
> class MyClass
> {
> C m;
> public:
> void f();
> };

If you want to put the implementation in your cpp file and compile it
separately you need to use the export keyword. Unfortunately, many C++
systems (including VC++) do not support this feature yet.
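
For what it's worth, the split being described would look roughly like this
with export (just a sketch based on my reading of the rules; in practice only
EDG-based front ends such as Comeau accept it):

// MyClass.h
export template <typename C>   // the whole class template is exported
class MyClass
{
    C m;
public:
    void f();
};

// MyClass.cpp -- compiled on its own; other files only include MyClass.h
#include "MyClass.h"

template <typename C>
void MyClass<C>::f()
{
    // actual implementation of f()
}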

>
> How can I move implementation of f() into separate CPP? I would like to
> write something like that:
>
> File MyClass.cpp:
>
> #include "MyClass.h"
>
> void template <typename C>MyClass::f()
> {
> // actual implementation of f();
> }
> -----------------

The above code is invalid. It should have been:

template <typename C>
void MyClass<C>::f(void)
{
    // actual implementation of f()
}

>

> My VC 2005 doesn't allow me to write as above.
>

With the above correction it should. However, it is very likely that you
will get linker errors. When MyClass.cpp gets compiled, the compiler
sees the definition of the template MyClass<>::f; but it doesn't know
what type C might be. The compiler only does a few syntactic checks and
delays the instantiation until it sees a call to MyClass<>::f or an
explicit instantiation.

Now consider:

// user.cpp

#include "MyClass.h"

int main()
{
MyClass<int> t;
t.f(); // implicit instantiation
}

The line that calls t.f() requests an instantiation of
MyClass<int>::f(). But the compiler doesn't see the definition of
MyClass<int>::f(), since it is placed in MyClass.cpp rather than in
user.cpp, so it can't perform the instantiation. What the compiler does
instead is assume that the instantiation is done elsewhere and generate
a stub for the linker. The target, of course, is nowhere to be found
(assuming no explicit instantiation is made in MyClass.cpp), and hence
the linker error.
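
One way to make the above link, while keeping this file layout, is to add an
explicit instantiation to MyClass.cpp for each type the rest of the program
uses -- int here, to match the main() above (a sketch):

// MyClass.cpp
#include "MyClass.h"

template <typename C>
void MyClass<C>::f()
{
    // actual implementation of f()
}

// either instantiate the whole class for int...
template class MyClass<int>;
// ...or just the one member that user.cpp calls:
// template void MyClass<int>::f();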

Regards,
Ben

Bo Persson

Apr 4, 2006, 2:56:21 AM

"Gene Bushuyev" <sp...@spamguard.com> skrev i meddelandet
news:N4XXf.54803$F_3....@newssvr29.news.prodigy.net...

> <pavel....@gmail.com> wrote in message
> news:1143978704....@i39g2000cwa.googlegroups.com...
>> Assume I have the code
>>
>> File: MyClass.h:
>>
>> template <typename C>
>> class MyClass
>> {
>> C m;
>> public:
>> void f();
>> };
>>
>> How can I move implementation of f() into separate CPP?

>


> P.S. I don't think you are interested in discussing exported
> templates here as
> they are mostly left unimplemented and even when implemented not
> that useful
> anyway.

Except that this is exactly the place where it would be useful !


Bo Persson

Axter

Apr 4, 2006, 2:57:52 AM
pavel....@gmail.com wrote:
> Assume I have the code
>
> File: MyClass.h:
>
> template <typename C>
> class MyClass
> {
> C m;
> public:
> void f();
> };
>
> How can I move implementation of f() into separate CPP? I would like to
> write something like that:
>
> File MyClass.cpp:
>
> #include "MyClass.h"
>
> void template <typename C>MyClass::f()
> {
> // actual implementation of f();
> }
> -----------------
>
> My VC 2005 doesn't allow me to write as above.

Setting aside the implementation syntax error, you can't generally
separate the template class declaration from the implementation unless
your compiler supports the export keyword, which VC++ does not.

You can separate it in a less generic way by using explicit
instantiation. This is less generic because you have to add an explicit
instantiation for each type T you intend to use.
Example code:
//MyClass.h

template <typename C>
class MyClass
{
    C m;
public:
    void f();
};

//MyClass.cpp
// *****start of MyClass.cpp
#include <string>      // for the std::string instantiation below
#include "MyClass.h"

template <typename C>
void MyClass<C>::f(void)
{
    // actual implementation of f()
}

template class MyClass<int>;         // explicit instantiation for type int
template class MyClass<std::string>; // explicit instantiation for type std::string
template class MyClass<float>;       // explicit instantiation for type float

// *****end of MyClass.cpp
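
With those three instantiations in place, any file that includes only
MyClass.h can use them freely; a type that isn't on the list compiles but
fails at link time. A hypothetical user:

// user.cpp
#include <string>
#include "MyClass.h"

int main()
{
    MyClass<int> a;
    a.f();                    // OK: explicitly instantiated in MyClass.cpp
    MyClass<std::string> b;
    b.f();                    // OK as well
    // MyClass<char> c;
    // c.f();                 // would compile here, but fail to link
}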


----------------------------------------------------------------------------------------
David Maisonave
http://axter.com

Author of Axter's policy based smart pointers
(http://axter.com/smartptr)
----------------------------------------------------------------------------------------

Gene Bushuyev

Apr 5, 2006, 3:42:33 AM
"Bo Persson" <b...@gmb.dk> wrote in message news:49d014F...@individual.net...
[...]

>>
>> P.S. I don't think you are interested in discussing exported
>> templates here as
>> they are mostly left unimplemented and even when implemented not
>> that useful
>> anyway.
>
> Except that this is exactly the place where it would be useful !


You probably meant to say "it should be useful." That's what C++ committee
obviously thought when adding "export". Unfortunately, as it turned out,
exported templates still require the template definition to be available to the
compiler, so the only advantage it may have is making header files look cleaner,
probably at the expense of slower compilation. Well, if you have an IDE that can
collapse portions of code that you choose, even that small advantage disappears.

If you still think that exported templates are useful, I'm very interested to
hear where and how.

--
Gene Bushuyev (www.gbresearch.com)
----------------------------------------------------------------
There is no greatness where there is no simplicity, goodness and truth. ~ Leo
Tolstoy

Nicola Musatti

Apr 5, 2006, 3:47:19 AM

Axter wrote:
[...]

> You can seperate it in a less generic way by using Explicit
> Instantiation.
> This is less generic, because you have to add an explicit instantiation
> for each T type you intend to use.

I do it by leaving the function definitions in the header file but
wrapping them in an #ifdef directive. This allows me to choose in which
.cpp file I explicitly instantiate each of my specializations. Thus I
do not decrease interdependencies at the source file level, but I
manage to reduce compilation time, especially since I can put under the
same conditional the inclusion of those headers that are not required
by the template declaration.

This is especially effective for me because I have many class templates
which I use as base classes where each specialization is only used as a
base for a single derived class, thus keeping track of where each
specialization is instantiated is not a problem.
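
A rough sketch of that arrangement, with invented names (Thing and
THING_IMPL are just for illustration):

// Thing.h
#ifndef THING_H
#define THING_H

template <typename T>
class Thing
{
    T value;
public:
    void process();
};

#ifdef THING_IMPL
#include <iostream>   // heavy headers needed only by the definitions

template <typename T>
void Thing<T>::process()
{
    std::cout << value << '\n';
}
#endif // THING_IMPL

#endif // THING_H

// ThingInt.cpp -- the one file that provides the Thing<int> instantiation
#define THING_IMPL
#include "Thing.h"
template class Thing<int>;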

Cheers,
Nicola Musatti

Alan McKenney

Apr 5, 2006, 3:50:07 AM

benben wrote:
> > Assume I have the code
> >
> > File: MyClass.h:
> >
> > template <typename C>
> > class MyClass
> > {
> > C m;
> > public:
> > void f();
> > };
>
> If you want to put the implementation in your cpp file and compile it
> separately you need to use the export keyword. Unfortunately, many C++
> systems (including VC++) do not support this feature yet.
>
> >
> > How can I move implementation of f() into separate CPP? I would like to
> > write something like that:
> >
> > File MyClass.cpp:
> >
> > #include "MyClass.h"
> >
> > void template <typename C>MyClass::f()
> > {
> > // actual implementation of f();
> > }
> > -----------------

If the amount of actual code that depends upon the template argument
type ("C" in this case) is small, one approach is to move the
argument-independent code into a separate non-template class, in a
separate CPP file, and then have the template code/class
use/contain/derive from the non-template class.

It's not a general-purpose solution, and it doesn't eliminate the
implementation code in the .h file entirely, but if the application
lends itself to this, it can reduce the amount of code in the .h file,
and also reduce the amount of code that needs to be recompiled if the
type-independent part gets changed.
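
A small illustration of the idea (all names invented):

// LogBase.h -- type-independent part; its members are defined once
// in LogBase.cpp and compiled separately
#include <cstddef>

class LogBase
{
protected:
    void writeBytes(const void* data, std::size_t size);  // the real work
};

// Log.h -- the thin type-dependent layer that stays in the header
#include "LogBase.h"

template <typename T>
class Log : public LogBase
{
public:
    void log(const T& item)
    {
        writeBytes(&item, sizeof item);   // forwards to the non-template base
    }
};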

-- Alan McKenney

Bo Persson

Apr 6, 2006, 3:08:32 AM

"Gene Bushuyev" <sp...@spamguard.com> skrev i meddelandet
news:62qYf.64284$Jd.5...@newssvr25.news.prodigy.net...

> "Bo Persson" <b...@gmb.dk> wrote in message
> news:49d014F...@individual.net...
> [...]
>>>
>>> P.S. I don't think you are interested in discussing exported
>>> templates here as
>>> they are mostly left unimplemented and even when implemented not
>>> that useful
>>> anyway.
>>
>> Except that this is exactly the place where it would be useful !
>
>
> You probably meant to say "it should be useful." That's what C++
> committee
> obviously thought when adding "export". Unfortunately, as it turned
> out,
> exported templates still require the template definition to be
> available to the
> compiler, so the only advantage it may have is making header files
> look cleaner,
> probably at the expense of slower compilation.

Have you looked at this one?

http://www.comeaucomputing.com/4.0/docs/userman/export.html


Bo Persson

Gene Bushuyev

Apr 7, 2006, 3:48:17 AM
"Bo Persson" <b...@gmb.dk> wrote in message news:49id6gF...@individual.net...

>
> "Gene Bushuyev" <sp...@spamguard.com> skrev i meddelandet
> news:62qYf.64284$Jd.5...@newssvr25.news.prodigy.net...
>> "Bo Persson" <b...@gmb.dk> wrote in message
>> news:49d014F...@individual.net...
[...]
>> You probably meant to say "it should be useful." That's what C++
>> committee
>> obviously thought when adding "export". Unfortunately, as it turned
>> out,
>> exported templates still require the template definition to be
>> available to the
>> compiler, so the only advantage it may have is making header files
>> look cleaner,
>> probably at the expense of slower compilation.
>
> Have you looked at this one?
>
> http://www.comeaucomputing.com/4.0/docs/userman/export.html
>

Yes, everybody knows about Greg's implementation. Now tell us your point. How
does that disprove anything I have said?

--
Gene Bushuyev (www.gbresearch.com)
----------------------------------------------------------------
There is no greatness where there is no simplicity, goodness and truth. ~ Leo
Tolstoy

Andrei Polushin

Apr 8, 2006, 6:02:55 AM
Gene Bushuyev wrote:

> Bo Persson wrote:
>>> P.S. I don't think you are interested in discussing exported templates
>>> here as they are mostly left unimplemented and even when implemented
>>> not that useful anyway.
>> Except that this is exactly the place where it would be useful !
>
> You probably meant to say "it should be useful." That's what C++ committee
> obviously thought when adding "export". Unfortunately, as it turned out,
> exported templates still require the template definition to be available
> to the compiler,

Of course, the compiler will always require the template definition,
because it needs the source code to generate binary code.

> If you still think that exported templates are useful, I'm very
> interested to hear where and how.

Exported templates could be practically useful, because separate
compilation is practically useful. They could be as useful as the
.h and .c modules we use right now to separate interface from
implementation.

When we talk about exported templates, we can directly carry over the
expected benefits of modular programming:

+ Separate compilation increases (re)compilation speed.

For example, let us have the declaration of basic_string in <string>
module and its definition in <string.t> module. To instantiate
basic_string<long>, `exporting' compiler would generate and compile
a module with an instantiation:

// basic_string_long.c
#include <string.t>
template class std::basic_string<long>; // instantiate it

You can generate it yourself, but `export' would automate that
(an automation actually performed by Cfront).

As a result, when the definition of basic_string (in <string.t>)
changes, "basic_string_long.c" is the only file that gets recompiled.
And if you have that definition in <string>, all its users recompile
(this could be the whole program).

+ Elimination of pass-through dependencies could improve maintainability.

For example, my <map> header indirectly #includes 37 files and compiles
36,300 lines of code, most of which are declarations, definitions
and macros (!) required by the std::map implementation itself.

Contrast that with the <map> synopsis in the C++ standard: it is about 250
lines and does not #include any red-black trees or #define macros
that may negatively affect my application code.

On the other hand, a library developer has to use special techniques to
avoid some kinds of dependencies, e.g. he will never #include headers
that are too large, or define too many macros.

+ We can avoid source code exposure, _if_ it is possible to satisfy
our users with the several predefined binaries we distribute.

For example, in the C world we have an XML library called libxml2,
which typedefs its xmlChar to be `unsigned char'. Now I like it as is,
I don't need its sources, the binaries are enough. But if I ever need
to change xmlChar into `wchar_t', I will have to get the library
source, change the required typedef, and patch code in several places.
So I need the source code, because the binaries are not suitable.

Now imagine an XML library written in C++ with templates: each class
is parameterized by character type. As a vendor, I would provide its
binaries in two instantiations: for <char> and for <wchar_t>. If a
user requires an instantiation for <long>, he will need the source
code again.

And if the library is commercial, the user may even have to pay for
the source code.

The same principle applies to the classes declared in <string>, <ios>,
<istream>, <ostream>, <fstream>, <sstream>, <streambuf>, <locale> -
and that is half of the Standard Library.


There are several idealistic requirements from several people that make
simple things too complex.

- The most idealistic is the requirement to hide the source code
completely, but (see the XML example above) that kind of problem was not
solved even by the C world. It could be solved, however, by
/generics/ (Java-like or C#-like), with the same loss of efficiency.

The related assumption is that `exporting' should /compile/ the template
definition somehow, and that this is what /separate compilation/ means.
Compile to what? Later, when instantiating a template, the compiler will
need the source code of its definition. So the result of /compiling/ a
template definition is the source code, again and again.

- Another well-known show-stopper is the impractical requirement of
/context-dependent template instantiation/: the instantiation should
honor the context(s) in which the template is instantiated, according
to the C++ standard. But consider the context in the case of explicit
instantiation:

// set_A.c -- created manually
#include <set.t> // defines std::set
#include <A.h> // declares A
// ... some context could be defined here ...
template class std::set<A>; // instantiated explicitly

The answer is: the context is controlled by me (the programmer).

And if I can control it manually, leave it to me:

// set_A.c -- generated automatically
#include <set.t> // defines std::set
#include <A.h> // declares A
#if set_A_h_exists
#include "set_A.h" // optional: defines an instantiation context
#endif
/*exported*/ template class std::set<A>; // instantiated by `export'

In conclusion, I don't know why it was not widely implemented.

Hope it will be useful,

--
Andrei Polushin

kanze

Apr 10, 2006, 9:29:30 AM
Andrei Polushin wrote:
> Gene Bushuyev wrote:
> > Bo Persson wrote:
> >>> P.S. I don't think you are interested in discussing
> >>> exported templates here as they are mostly left
> >>> unimplemented and even when implemented not that useful
> >>> anyway.
> >> Except that this is exactly the place where it would be useful !

> > You probably meant to say "it should be useful." That's what
> > C++ committee obviously thought when adding "export".
> > Unfortunately, as it turned out, exported templates still
> > require the template definition to be available to the
> > compiler,

> Of course, the compiler will ever require the template
> definition, because the compiler needs the source code to make
> a binary code.

At some point in time. Most of the compilers I've used in the
past could handle it if the source wasn't available until link
time.

Of course, the real advantage of exported templates is that the
template implementation only sees the context where I've
included the header when looking up dependent names (and never
for macros). And that if I modify the template, I only have to
compile one of the files which uses any particular instantiation
of it, not all of them.

And, of course, that the person who works on the template
implementation doesn't have to check out and modify the header.

FWIW: everyone I've talked to who has actually used export has
liked it. It's part of C++, and a compiler which doesn't
implement it isn't really a C++ compiler. (One is willing to
cut a little slack when the feature was new. But the current
standard is seven years old now.)

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

David Abrahams

Apr 11, 2006, 9:21:27 AM

"Andrei Polushin" <polu...@gmail.com> writes:

> - The most idealistic is the requirement to hide the source code
> completely, but (see XML example above) that kind of problem was not
> solved even by the C world. It could be solved, however, by
> /generics/ (Java-like or C#-like), with the same loss of efficiency.
>
> The related assumption is that `exporting' should /compile/ template
> definition somehow, and that's thought as a /separate compilation/.
> Compile to what? Later, when instantiating a template, compiler will
> need a source code of its definition. So, the result of /compiling/ a
> template definition is the source code, again and again.

You can actually do this in C++ today, with a little extra coding.
See the enclosed for example:


// generic_for_each.cpp

// Copyright David Abrahams 2005. Distributed under the Boost
// Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <vector>
#include <list>
#include <deque>
#include <iterator>   // std::iterator_traits
#include <new>        // placement new
#include <boost/aligned_storage.hpp>
#include <boost/type_traits/alignment_of.hpp>

struct for_each_concepts
{
// Forward Iterator
void (*iter_copy)(void const* src, void* dst);
bool (*iter_not_equal)(void const*, void const*);
void (*iter_destroy)(void const*);
void* (*iter_deref)(void const*);
void (*iter_inc)(void*);

// Unary Function
void (*finvoke)(void const*f, void* arg);
};

// This part can be separately compiled.
void for_each_impl(
void const* start, void const* finish, void const* f
, void* temp, for_each_concepts const& model )
{
model.iter_copy(start, temp);
try
{
for ( void* p = temp;
model.iter_not_equal(p,finish);
model.iter_inc(p) )
{
model.finvoke(f, model.iter_deref(p) );
}
}
catch(...) { model.iter_destroy(temp); throw; }
model.iter_destroy(temp);
}

template <class T>
bool not_equal(void const* x, void const* y)
{
return *static_cast<T const*>(x)
!= *static_cast<T const*>(y);
}

template <class T>
void* deref(void const* x)
{
return &**static_cast<T const*>(x);
}

template <class T>
void inc(void* x)
{
++*static_cast<T*>(x);
}

template <class T>
void copy(void const* src, void* dst)
{
new (dst) T(*static_cast<T const*>(src));
}

template <class T>
void assign(void const* src, void* dst)
{
*static_cast<T*>(dst) = *static_cast<T const*>(src);
}

template <class T>
void destroy(void const* x)
{
static_cast<T const*>(x)->~T();
}

template <class F, class A>
void finvoke(void const* f, void* arg)
{
(*static_cast<F const*>(f))( *static_cast<A*>(arg) );
}

template <class ForwardIterator, class UnaryFunction>
inline void for_each(
ForwardIterator const& start, ForwardIterator const& finish,
UnaryFunction const& f
)
{
typedef ForwardIterator fi;

static for_each_concepts const models = {
copy<fi>, not_equal<fi>, destroy<fi>, deref<fi>, inc<fi>
, finvoke< UnaryFunction, typename std::iterator_traits<fi>::value_type>
};

boost::aligned_storage<
    sizeof(fi), boost::alignment_of<fi>::value   // size first, then alignment
> storage;

for_each_impl( &start, &finish, &f, storage.address(), models );
}

struct increment
{
template <class T>
void operator()(T& x) const
{
++x;
}
};

void test_for_each(std::vector<int>& v, std::list<int>& l, std::deque<int>& d)
{
::for_each(l.begin(), l.end(), increment());
::for_each(v.begin(), v.end(), increment());
::for_each(d.begin(), d.end(), increment());
}



--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


blwy10

Apr 17, 2006, 7:00:39 AM
Read Herb Sutter's paper on export:

http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf

"Why can't we afford export"

It highlights real issues on export, and is a very credible paper. [as
to why, see for yourself :P]

Benjamin Lau

James Kanze

Apr 17, 2006, 4:42:37 PM
blwy10 wrote:
> Read Herb Sutter's paper on export:

> http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf

> "Why can't we afford export"

> It highlights real issues on export, and is a very credible
> paper. [as to why, see for yourself :P]

This paper was widely discussed in the committee itself, and it
didn't seem to convince anyone -- certainly not the experts who
had actually worked with expert. (Herb was, in fact, taken to
town by some of the experts because his paper misrepresented
their views. Probably unintentional: I think in a couple of
cases, Herb took an anecdotal war story as a thorough
evaluation -- and of course, in war stories, everything is a
problem.)

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

Andrei Polushin

Apr 17, 2006, 5:31:57 PM
blwy10 writes:
> Read Herb Sutter's paper on export:
> http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf
> "Why can't we afford export"

Yes, I have read it (or its appendix) before.

More links, if somebody wants to propose them:

* Comeau C++ Export Overview
<http://www.comeaucomputing.com/4.0/docs/userman/export.html>
<http://www.comeaucomputing.com/4.0/docs/userman/ati.html>

* Herb Sutter. "Export" Restrictions (included as appendix to n1426)
<http://www.gotw.ca/publications/mill23.htm>
<http://www.gotw.ca/publications/mill24.htm>

* Template instantiation in Cfront 3.0 (released in 1992)
Bjarne Stroustrup. "The Design and Evolution of C++", 15.10

* Details on Cfront implementation of `export' (newsgroup discussion)
<http://tinyurl.com/m8fal>

Other links and thoughts are welcome.


> It highlights real issues on export, and is a very credible paper.
> [as to why, see for yourself :P]

No, the paper is not credible, even if Herb Sutter is credible.
And it does not highlight real issues on `export', because it draws
its conclusions from one of the several ways to fail.

On the other hand, there was a successful implementation (Cfront 3.0)
that does exactly what we need: separate translation.

------

Just look at what I am asking for:

I want several (not all) classes to be exported and provided to me
in the form of two separate source modules: .h for the interface and
.t for the implementation.

I want to include the .h file most of the time, and the compiler will
include the .t file into a synthesized translation unit when it needs
to instantiate the template with another set of parameters.

Is it too hard to implement in the compiler? Does it require more than
3 human-years to implement? No (it's easy) and no (several months).
Could it be useful? Yes.

In particular, I am concerned with the following (STL) classes:

basic_string
char_traits
ios_base
basic_ios
basic_streambuf
basic_istream
basic_ostream
basic_iostream
basic_stringbuf
basic_istringstream
basic_ostringstream
basic_stringstream
basic_filebuf
basic_ifstream
basic_ofstream
basic_fstream
(see also the XML classes from my previous posting)

Do you want to compile their implementations each time you use them?

Note that I don't expect any `compile time' benefits from exporting
either STL containers, or STL algorithms, as they are instantiated
too often. (But exporting them could be a matter of dependencies.)

In fact, I insist on the following statement from n1426 (p. 4):

Export is certainly slower in most cases, but it is claimed that it
can sometimes improve build speed in carefully constructed cases.
(John Spicer and Daveed Vandevoorde characterize the latter as "hard
to imagine.")

but I suppose you can easily imagine that - see the classes above.

The implementation of `export' should be driven by practical needs;
nothing more is required. It's both easy to implement and beneficial.

--
Andrei Polushin

Francis Glassborow

Apr 17, 2006, 5:33:09 PM
In article <1145196268.8...@u72g2000cwu.googlegroups.com>,
blwy10 <blw...@gmail.com> writes

>Read Herb Sutter's paper on export:
>
>http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf
>
>"Why can't we afford export"
>
>It highlights real issues on export, and is a very credible paper. [as
>to why, see for yourself :P]

However WG21 decided at the Oxford Meeting that the subject was closed
and that we would not discuss export further. It is in the Standard and
has been correctly implemented and code relying on that implementation
is in the field (already was before that meeting) so C++ will live with
it.


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

P.J. Plauger

Apr 17, 2006, 5:31:22 PM
"blwy10" <blw...@gmail.com> wrote in message
news:1145196268.8...@u72g2000cwu.googlegroups.com...

> Read Herb Sutter's paper on export:
>
> http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf
>
> "Why can't we afford export"
>
> It highlights real issues on export, and is a very credible paper. [as
> to why, see for yourself :P]

Nevertheless, the C++ committee soundly rejected the notion of removing
export, after a protracted discussion at Oxford (IIRC).

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Gene Bushuyev

Apr 18, 2006, 7:29:03 AM
"Andrei Polushin" <polu...@gmail.com> wrote in message
news:1145287750....@z34g2000cwc.googlegroups.com...

> blwy10 writes:
>> Read Herb Sutter's paper on export:
>> http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf
[...]

> No, the paper is not credible, even if Herb Sutter is credible.
> And it does not highlight real issues on `export', because it draws
> its conclusions from one of the several ways to fail.

Why would you say that? And what is the "real issue"?
Herb's is probably the most complete analysis we have thus far. In particular,
it demonstrates that "export":

1) doesn't hide the source code;
2) doesn't speed up compilation; if anything the opposite is true;
3) doesn't reduce dependencies, at best it hides them from the programmer;
4) doesn't provide separate compilation benefits as in case of non-template
compilation units;
5) is more difficult to use than non-exported templates;

I omitted other subtle issues that Herb correctly identified. And the only real
advantage so far is prevention of macro leakage, which is basically a side
effect. There is also the visible interface/implementation separation issue, but
existing IDEs that implement code folding make that a moot point.

And even though the standardization committee decided to keep the status quo, I
don't hear compiler vendors, who like to boast about their standard compliance,
eager to announce plans to implement this feature. If anything, what I heard in
the presentations of some major compiler vendors is that there will be no "export"
in the foreseeable future. And all basically repeat the same line: misfeature,
no customer demand.

>
> On the other hand, there was a successful implementation (Cfront 3.0)
> that does exactly what we need: separate translation.

Not according to Herb: "It's true that Cfront had some similar functionality a
decade earlier. But Cfront's implementation was slow, and it was based on a
"works most of the time" heuristic such that, when Cfront users encountered
template related build problems, a common first step to get rid of the problem
was to blow away the cache of instantiated templates and reinstantiate
everything from scratch."

--
Gene Bushuyev (www.gbresearch.com)
----------------------------------------------------------------
There is no greatness where there is no simplicity, goodness and truth. ~ Leo
Tolstoy

P.J. Plauger

Apr 18, 2006, 7:43:01 PM
"Gene Bushuyev" <sp...@spamguard.com> wrote in message
news:AeW0g.66360$H71....@newssvr13.news.prodigy.com...

> "Andrei Polushin" <polu...@gmail.com> wrote in message
> news:1145287750....@z34g2000cwc.googlegroups.com...
>> blwy10 writes:
>>> Read Herb Sutter's paper on export:
>>> http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1426.pdf
> [...]
>> No, the paper is not credible, even if Herb Sutter is credible.
>> And it does not highlight real issues on `export', because it draws
>> its conclusions from one of the several ways to fail.
>
> Why whould you say that? And what is the "real issue?"
> Herb's is probably the most complete analysis we have thus far. In
> particular,
> it demonstrates that "export" :
>
> 1) doesn't hide the source code;

One of the ways export was sold to the committee was as a mechanism for
hiding source code, but that was never required by the C++ Standard.
You can hide source code without export and you can not hide it with
export -- the issues are orthogonal.

> 2) doesn't speed up compilation; if anything the opposite is true;

We've been shipping an export version of our Standard C++ library for
over three years now, so we have some experience in this area (perhaps
the only experience that can be independently verified, by any of our
customers, at least). We found that export is compilation speed neutral.
Again, this was one of the ways export was (over)sold to the committee,
but the C++ Standard cannot and does not require that compile times
improve with export.

> 3) doesn't reduce dependencies, at best it hides them from the programmer;

Here you're wrong. Using export provides a cleaner and easier to manage
compilation environment for exported code. And we've found that it also
keeps us more honest about subtle dependencies in our library. For
structuring large projects, export is definitely a Good Thing (TM).

> 4) doesn't provide separate compilation benefits as in case of
> non-template
> compilation units;

The benefits are pretty similar, though different.

> 5) is more difficult to use than non-exported templates;

Not really. It's just a different style of partitioning code.

> I omitted other subtle issues that Herb correctly identified. And the only
> real
> advantage so far is prevention of macros leakage, which is basically a
> side
> effect.

Not true. See above.

> There is also interface/implementation visible separation issue,
> but
> existing IDEs that implement code folding make that a moot point.

An exaggeration of IDE benefits, at best.

> And even though the standardization committee decided to keep the status
> quo, I
> don't hear compiler vendors, who like boasting their standard compliance,
> eager
> to anounce their plans to implement this feature. If anything, what I
> heard from
> the presentations of some major compiler vendors that there will be no
> "export"
> in the foreseeable future. And all basically repeat the same line:
> misfeature,
> no customer demand.

Right. And that's because the C++ committee ten years ago decided to ignore
the persistent warnings of implementers and put export in the language
anyway. They did the same thing with the namespace rules for C headers,
and several other requirements typically beyond the control of C++ folks.
And in all cases they reaped the whirlwind they sowed -- vendors had good
and proper reasons not to invest the effort required to get 100 per cent
conformance. Thus, the coercive effect of the C++ Standard was dissipated
by overreaching. Today there is only one source for a C++ front end that
fully parses the language -- Edison Design Group. And only one source for
a fully conforming C/C++ library -- Dinkumware, Ltd. I don't think that
either of us can claim a huge advantage in guaranteeing complete
conformance.
(If we did then you'd probably see other companies rushing to catch up.)
OTOH, I don't think that either of us is sorry for having done the whole
job -- at the least it has delivered fringe benefits.

But just because adding export was ill advised, that doesn't mean taking
it out is now the right thing to do. The committee will *not* regain the
power to coerce it squandered -- the clear message to vendors would be
to wait and see if any hard features are really going to stick around.
You could invest programmer years in getting a competitive advantage,
only to have it trashed by a fickle committee. Fuggedabahtit.

It's also specious to argue against export on technical grounds today.
All those issues were aired in 1996, and overridden by committee zeal
to require separate compilation of templates. It turns out that, on
balance, export *does* deliver some benefits, though arguably not worth
the cost to compiler implementers. Arguing otherwise from bias,
ignorance, or wishful thinking is even worse; it simply clouds any
rational appraisal of a little-understood language feature.

>> On the other hand, there was a successful implementation (Cfront 3.0)
>> that does exactly what we need: separate translation.
>
> Not according to Herb: "It's true that Cfront had some similar
> functionality a
> decade earlier. But Cfront's implementation was slow, and it was based on
> a
> "works most of the time" heuristic such that, when Cfront users
> encountered
> template related build problems, a common first step to get rid of the
> problem
> was to blow away the cache of instantiated templates and reinstantiate
> everything from scratch."

Basically true. Though there was something to be said for codifying that
existing practice (and hence forcing cfront to clean up its act), rather
than mandating an 800-pound paper tiger (to mix a metaphor). Too late now,
however.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com


vande...@gmail.com

Apr 19, 2006, 6:10:15 AM

Gene Bushuyev wrote:
[...]

> Herb's is probably the most complete analysis we have thus far.

It depends on what you mean by "complete". It's probably the
most refuted analysis of export templates.

Before addressing your numbered points below, I should point out
that "export template" is a standard language construct: In and of
itself it doesn't guarantee much about certain aspects of an
implementation (like speed, user-interface, etc.). Nevertheless, it
may or may not enable certain properties of an implementation.

> In particular, it demonstrates that "export" :
>
> 1) doesn't hide the source code;

Exported templates can be compiled separately, after which the
source code is no longer needed for the instantiation process.

> 2) doesn't speed up compilation; if anything the opposite is true;

Separately compiled exported templates can contribute to
drastically reduced compilation times (primarily because the
alternative of re-parsing included templates is so expensive).

> 3) doesn't reduce dependencies, at best it hides them from the programmer;

Exported templates often drastically reduce effective
dependencies compared to included templates.

> 4) doesn't provide separate compilation benefits as in case of non-template
> compilation units;

Separately compiled exported templates provide several of the
same benefits as separately compiled functions and variables.
However, I estimate that in practice the interfacing mechanism
will be more expensive for exported templates than for functions
and variables.

> 5) is more difficult to use than non-exported templates;

That's really not quantifiable. My perception is that exported
templates are easier to use because of the reduced concerns
about inclusion-interference.


Note that I used to believe most of those statements above too
(prior to implementing and using exported templates). That
doesn't make them right, however.

Have you used exported templates (e.g., using the Intel or
Comeau compilers)?

[...]


> And even though the standardization committee decided to keep the status quo, I
> don't hear compiler vendors, who like boasting their standard compliance, eager
> to anounce their plans to implement this feature. If anything, what I heard from
> the presentations of some major compiler vendors that there will be no "export"
> in the foreseeable future. And all basically repeat the same line: misfeature,
> no customer demand.

It's a really hard feature to implement. It may make economic
sense not to implement export template, but I doubt any vendor
will market that decision as "we're just not competent enough to
implement it".

There is certainly plenty of demand for better build times.

[...]

Daveed

Andrei Polushin

Apr 19, 2006, 12:33:04 PM
Gene Bushuyev writes:

> Andrei Polushin writes:
>> No, the paper is not credible, even if Herb Sutter is credible.
>> And it does not highlight real issues on `export', because it draws
>> its conclusions from one of the several ways to fail.
>
> Why whould you say that? And what is the "real issue?"
> Herb's is probably the most complete analysis we have thus far.

The analysis is the most complete in comparison with what? Are there
any other analytical papers publicly available?

The paper summarizes a single implementation, which "was not created
because of customer demand" (that's true), and draws several hasty
generalizations from that single case.

More importantly, this (rejected) paper became both a blind argument
for the opponents of `export' (like you), and an excuse for compiler
vendors (like Herb's company).

In my previous two posts, I've already demonstrated "how `export' could
be useful", and your answer shows that you didn't read them, sorry.


> In particular, it demonstrates that "export" :
>
> 1) doesn't hide the source code;

See my topmost post: `export' is not intended for that.
You should use the explicit template instantiation mechanism instead.


> 2) doesn't speed up compilation; if anything the opposite is true;

It does, for carefully constructed cases, see my previous two posts.
Compilation speed is a *design-time issue*, just like it is with
modular programming.


> 3) doesn't reduce dependencies, at best it hides them from the programmer;

What does that /hides/ mean?

The template instantiation should be separately compiled, so the caller
of that template does not depend on the template definition. When the
template definition changes, its instantiations are recompiled, and the
callers only have to relink. Thus, the callers do not depend on either
the template definition, or its private types, or its private macros.

Just like in modular programming, did you expect more?


> 4) doesn't provide separate compilation benefits as in case of
> non-template compilation units;

I use term "separate compilation" in its original meaning: separate
compilation of template instantiations into object files, and I feel
myself more adequate.

The benefits are the same.


> 5) is more difficult to use than non-exported templates;

When we were novices in C, we had difficulties using modular
programming properly: my first experience was to #include all my
.c files into the .c file with main(). My compiles took too long,
because it was "difficult to use it properly".

And `export' is essentially a similar instrument for a similar
task. It is to be used properly, even if that is difficult for novices.


> I omitted other subtle issues that Herb correctly identified. And the only real
> advantage so far is prevention of macros leakage, which is basically a side
> effect. There is also interface/implementation visible separation issue, but
> existing IDEs that implement code folding make that a moot point.

:)


> And even though the standardization committee decided to keep the status quo, I
> don't hear compiler vendors, who like boasting their standard compliance, eager
> to anounce their plans to implement this feature. If anything, what I heard from
> the presentations of some major compiler vendors that there will be no "export"
> in the foreseeable future. And all basically repeat the same line: misfeature,
> no customer demand.

The feature was first misunderstood, and then widely advertised as a
silver bullet.


>> On the other hand, there was a successful implementation (Cfront 3.0)
>> that does exactly what we need: separate translation.
>
> Not according to Herb: "It's true that Cfront had some similar functionality a
> decade earlier. But Cfront's implementation was slow, and it was based on a
> "works most of the time" heuristic such that, when Cfront users encountered
> template related build problems, a common first step to get rid of the problem
> was to blow away the cache of instantiated templates and reinstantiate
> everything from scratch."

`Thus Spoke Herb'. Did you read the above as `the Cfront implementation was
unsuccessful, had exhausted its potential, and could not be enhanced further'?

Why?

--
Andrei Polushin

blwy10

Apr 19, 2006, 12:44:53 PM
Note to the moderator: This post is meant as an apology.

Ah...I see...Sorry then for being misleading. I had no ill intention on
my part, but I really should have researched more closely rather than
just blindly giving a link and asserting the paper's credibility.

Yours apologetically
Benjamin

kanze

Apr 19, 2006, 12:47:27 PM
Gene Bushuyev wrote:

> Herb's is probably the most complete analysis we have thus far.

Herb's analysis had the advantage of bringing most of the
objections together in a single paper. Other than that, it is
based on a number of technical misconceptions, which were
refuted by the standards committee.

> In particular,
> it demonstrates that "export" :
>
> 1) doesn't hide the source code;

Who cares? (This is actually an implementation issue, and an
implementation can use export to hide the source code.)

> 2) doesn't speed up compilation; if anything the opposite is true;

This is simply false. Try it. At worst, export doesn't change
the speed of compilation; in many cases, it improves it a
little; and in a few particular cases, it improves it a lot.

> 3) doesn't reduce dependencies, at best it hides them from the
> programmer;

Again, this is simply false. In fact, the reduction of
dependencies is always present, and is generally the dominant
reason in favor of adopting export.

> 4) doesn't provide separate compilation benefits as in case of
> non-template compilation units;

I don't understand what you are saying here.

> 5) is more difficult to use than non-exported templates;

That's not been my experience -- just the opposite.

> I omitted other subtle issues that Herb correctly identified.

Herb reported a number of issues which he'd heard about. People
actually using export, and the people who implemented it, report
differently.

> And the only real advantage so far is prevention of macros
> leakage, which is basically a side effect. There is also
> interface/implementation visible separation issue, but
> existing IDEs that implement code folding make that a moot
> point.

What does code folding have to do with who can check a file out
of the source code management system, under what conditions?
Without export, you don't use templates at the application level
in large projects. It's that simple.

> And even though the standardization committee decided to keep
> the status quo, I don't hear compiler vendors, who like
> boasting their standard compliance, eager to anounce their
> plans to implement this feature. If anything, what I heard
> from the presentations of some major compiler vendors that
> there will be no "export" in the foreseeable future. And all
> basically repeat the same line: misfeature, no customer
> demand.

Which major vendor is saying that? The last time I asked, the
two "vendors" I work with both expressed an intention to be 100%
conformant, including export. When, of course, is another
matter.

> > On the other hand, there was a successful implementation
> > (Cfront 3.0) that does exactly what we need: separate
> > translation.

> Not according to Herb: "It's true that Cfront had some similar
> functionality a decade earlier. But Cfront's implementation
> was slow, and it was based on a "works most of the time"
> heuristic such that, when Cfront users encountered template
> related build problems, a common first step to get rid of the
> problem was to blow away the cache of instantiated templates
> and reinstantiate everything from scratch."

CFront had a number of problems. And what it actually did
wasn't that similar to export -- it couldn't really be, since
name lookup in standard templates is completely different than
what it was in CFront.

--
James Kanze GABI Software

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

David Abrahams

Apr 19, 2006, 7:17:36 PM
vande...@gmail.com writes:

>> 2) doesn't speed up compilation; if anything the opposite is true;
>
> Separately compiled exported templates can contribute to
> drastically reduced compilation times (primarily because the
> alternative of re-parsing included templates is so expensive).

Theoretically, exported templates can be compiled down to object code
-- not the usual object code that gets executed at runtime, but object
code that gets executed at compile time -- code that instantiates the
templates. That could potentially have a major impact on compilation
speed of code using those templates. It would be the difference
between an interpreted language (what compilers do now to instantiate
templates) and a compiled one.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


wka...@yahoo.com

Apr 19, 2006, 7:34:26 PM
P.J. Plauger wrote:
...

> Thus, the coercive effect of the C++ Standard was dissipated
> by overreaching. Today there is only one source for a C++ front end that
> fully parses the language -- Edison Design Group.
...

I think there is a strong argument that, in the case of "export", the
problem was under- rather than overreaching. I'm not a compiler
developer, but it seems like a good "export" implementation could
be extended to support a general, "include-file-free" import/export
capability, along the lines of Modula 2 or Java. I would guess
there are enough of us who would want to use such a
capability that it would be profitable for compiler makers
to implement it.

Andrei Polushin

Apr 19, 2006, 8:22:10 PM
James Kanze wrote:
> Gene Bushuyev wrote:

>> Andrei Polushin wrote:
>>> On the other hand, there was a successful implementation
>>> (Cfront 3.0) that does exactly what we need: separate
>>> translation.
>>
>> Not according to Herb: "It's true that Cfront had some similar
>> functionality a decade earlier. But [...]

>
> CFront had a number of problems. And what it actually did
> wasn't that similar to export -- it couldn't really be, since
> name lookup in standard templates is completely different than
> what it was in CFront.

I wish my compiler had at least CFront-style functionality 7 years
ago. Having that, I would have been 99% satisfied, even for the next
seven years. The remaining 1% is a strict conformance to those
context-dependent name lookup rules, which are also in the standard.

Thus, one part of the standard makes it too expensive to implement
another. Normally, the most desired feature should be implemented
first; the other features are to be either delayed or relaxed.
The real paradox is that both were left widely unimplemented.

Assuming you understand what I mean, what do you think about

* relaxing context-dependent name lookup rules for exported templates?

* defining some means to explicitly specify the instantiation context
for exported templates (like it's now for explicit instantiations)?

Should there be something like this, which will make the implementation
of `export' less expensive?

--
Andrei Polushin

Gene Bushuyev

Apr 20, 2006, 1:08:38 PM
"Gene Bushuyev" <sp...@spamguard.com> wrote in message
news:AeW0g.66360$H71....@newssvr13.news.prodigy.com...
[...]

> Herb's is probably the most complete analysis we have thus far. In particular,
> it demonstrates that "export" :
>
> 1) doesn't hide the source code;
> 2) doesn't speed up compilation; if anything the opposite is true;
> 3) doesn't reduce dependencies, at best it hides them from the programmer;
> 4) doesn't provide separate compilation benefits as in case of non-template
> compilation units;
> 5) is more difficult to use than non-exported templates;
[...]

So, all the big guns hazing a little guy :-) Anyway, at the risk of seeming
heretical, I still want to continue discussing this topic.
Below I summarized the opinions expressed in this thread about the five points
of template export, adding my comments.

G.B.:
> 1) doesn't hide the source code;

Herb Sutter: source exposure for the definition remains.

P.J. Plauger: You can hide source code without export and you can not hide it
with export -- the issues are orthogonal.

Daveed Vandevoorde: Exported templates can be compiled separately, after which
the source code is no longer needed for the instantiation process.

Andrei Polushin: `export' is not intended for that

James Kanze: Who cares?

Well, I mentioned hiding the source code above because that's what many people
expected from "export" in the beginning, and undoubtedly some are still expecting.
Irreversible conversion of the source code to binary is important for library
developers; it provides a reasonable protection of intellectual property.
People expected the same from "export" for templates, but it ain't gonna happen:
the template definitions must still be available to the compiler at the point of
instantiation. When D.V. says the "source code" is no longer needed, that's
definitely not the same as compilation into binary, because whatever new form
the source code assumes, it is (easily) reversible; the same problem as with .NET
assemblies or Java bytecode. Reversible kinds of code hiding were tried before;
there is nothing specific to "export" about them.

G.B.:
> 2) doesn't speed up compilation; if anything the opposite is true;

H.S.: it is unknown ... whether export-ized builds will in general be the same
speed, faster, or slower in common real-world use.

P.J.P.: We found that export is compilation speed neutral.

D.V.: Separately compiled exported templates can contribute to drastically
reduced compilation times

A.P.: It does, for carefully constructed cases.

J.K.: This is simply false. Try it.

I guess the answers indicate that we don't really know. The real-world testing
hasn't been done yet, one reason being the lack of compiler support. I would
assume that significantly more work is required from the compiler just to do
"export", e.g. enforcing the ODR can be a big task. And I don't see any savings
exclusively due to "export" that couldn't also be achieved without it. With or
without "export," the compiler isn't required to rebuild all the files that use
the template; it just needs to cover all instantiations. Pre-compiled header
files can also reduce the burden of template parsing.

G.B.:
> 3) doesn't reduce dependencies, at best it hides them from the programmer;

H.S.: Remember that export only hides dependencies; it doesn't eliminate them.

P.J.P.: Here you're wrong.

D.V.: Exported templates often drastically reduce effective dependencies
compared to included templates.

A.P.: What does that /hides/ mean?

J.K.: Again, this is simply false.

Now, "wrong" and "false" aren't very helpful in resolving this issue.
When the template definition changes the other compilation units have to be
recompiled also to cover all instantiations. Compiler has to see the definitions
and all dependencies that are brought with them. Dependencies are hidden from
the programmer, but not from the compiler.

G.B.:
> 4) doesn't provide separate compilation benefits as in case of non-template
> compilation units;

H.S.: Exported templates are not truly "separately compiled" in the usual sense
we mean when we apply that term to functions.

P.J.P.: The benefits are pretty similar, though different.

D.V.: Separately compiled exported templates provide several of the same benefits
as separately compiled functions and variables.

A.P.: I use term "separate compilation" in its original meaning: separate
compilation of template instantiations into object files, and I feel
myself more adequate

J.K.: I don't understand what you are saying here.

I think I need to elaborate on what I meant in the fourth point. When we speak
about separate compilation of non-template functions, we expect that they are
compiled to binary code and their definitions are no longer needed. That
brings many benefits: changes to the implementation require recompiling only
that file, the source code cannot be deduced from the binary, the dependencies
are contained in the implementation file, etc. That's not the case with
exported templates. When an exported implementation changes, not only that
file but all instantiations of the template have to be recompiled. The
other points were already mentioned in the previous items.

G.B.:


> 5) is more difficult to use than non-exported templates;


H.S.: "export" complicates the C++ language and makes it trickier to use,
including that export actually changes the fundamental meaning of parts of the
language in surprising ways that it is not clear were foreseen.

P.J.P.: Not really. It's just a different style of partitioning code.

D.V.: That's really not quantifiable. My perception is that exported templates
are easier to use because of the reduced concerns about inclusion-interference.

A.P.: It is to be used properly, even if it is difficult for novices.

J.K.: That's not been my experience -- just the opposite.

This item is probably the most subjective one. What is surprising and
unintuitive to some people is not at all surprising to others. Still, more
convoluted error messages are not to be ignored; with exported templates the
instantiation paths can be pretty long and confusing.

I'm sure we will have more definite answers after the majority of compiler
vendors implement this feature and people start using it in the large projects.

kanze

unread,
Apr 20, 2006, 1:10:04 PM4/20/06
to
wka...@yahoo.com wrote:
> P.J. Plauger wrote:
> ...
> > Thus, the coercive effect of the C++ Standard was
> > dissipated by overreaching. Today there is only one source
> > for a C++ front end that fully parses the language --
> > Edison Design Group.
> ...

> I think there is a strong argument that, in the case of
> "export", the problem was under- rather than overreaching.
> I'm not a compiler developer, but it seems like a good
> "export" implementation could be extended to support a
> general, "include-file-free" import/export capability, along
> the lines of Modula 2 or Java. I would guess there are enough
> of us who would want to use such a capability that it would be
> profitable for compiler makers to implement it.

And there is currently a proposal to do so before the C++
standardization committee. By none other than the person who
actually wrote the one existing implementation of export.

The problem with such proposals is in the details. I've read it
(quickly), and it seems just what the doctor ordered. But until
someone has actually used it, there's just no way of knowing if
there aren't some hidden subtleties which make it difficult,
or even impossible, to use in a practical situation. So we have
a bit of a vicious circle: it's not reasonable to adopt it into
the standard without some experience with it; to get that
experience, we need an implementation; and the implementers
won't implement it until it has been adopted into the standard.

In the past, the circle was broken at the last step:
implementers often tried to add additional features to help
their customers, which in turn brought return orders from the
customers who used them. (This is sometimes thought of as
"locking customers in". It has both good and bad sides, but it
is necessary to some degree if we are to advance.)

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

unread,
Apr 20, 2006, 1:10:48 PM4/20/06
to
Andrei Polushin wrote:
> James Kanze wrote:
> > Gene Bushuyev wrote:
> >> Andrei Polushin wrote:
> >>> On the other hand, there was a successful implementation
> >>> (Cfront 3.0) that does exactly what we need: separate
> >>> translation.

> >> Not according to Herb: "It's true that Cfront had some
> >> similar functionality a decade earlier. But [...]

> > CFront had a number of problems. And what it actually did
> > wasn't that similar to export -- it couldn't really be,
> > since name lookup in standard templates is completely
> > different than what it was in CFront.

> I wish my compiler had at least CFront-style functionality 7
> years ago.

I agree, and I would have been happy had the CFront bugs been
fixed, rather than inventing something new.

> Having that, I would have been 99% satisfied, even
> for the next seven years. The remaining 1% is a strict
> conformance to those context-dependent name lookup rules,
> which are also in the standard.

> Thus, one part of the standard makes it too expensive to
> implement another. Normally, the most desired feature should
> be implemented first; the other features are to be either
> delayed or relaxed. The real paradox is that they are both left
> widely unimplemented.

> Assuming you understand what I mean, what do you think about

> * relaxing context-dependent name lookup rules for exported templates?

> * defining some means to explicitly specify the instantiation context
> for exported templates (like it's now for explicit instantiations)?

> Should there be something like this, which will make the
> implementation of `export' less expensive?

I don't think it would change much. Export isn't cheap, but
neither is two phased lookup or any number of other things. The
problem is more political than anything else -- companies like
Microsoft or Sun certainly have the resources to implement
export if they wanted to; they just prefer to use them on other
things (CLI or Java, for example).

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

wka...@yahoo.com

unread,
Apr 21, 2006, 7:39:47 AM4/21/06
to

David Abrahams wrote:
> vande...@gmail.com writes:
>
>>> 2) doesn't speed up compilation; if anything the opposite is true;
>>
>> Separately compiled exported templates can contribute to
>> drastically reduced compilation times (primarily because the
>> alternative of re-parsing included templates is so expensive).
>
> Theoretically, exported templates can be compiled down to object code
> -- not the usual object code that gets executed at runtime, but object
> code that gets executed at compile time -- code that instantiates the
> templates. That could potentially have a major impact on compilation
> speed of code using those templates. It would be the difference
> between an interpreted language (what compilers do now to instantiate
> templates) and a compiled one.
...

Couldn't include files also be compiled to a form of object code? C++
compilers typically use bottom-up parsing (I think). So the fact that (in
general) you can't build a single parse tree (with the start symbol at the
root) for an include file (whereas you can for an implementation unit
containing exported template(s)) is not that significant.

I know that (at least at one time), the Borland compiler "pre-compiled"
include files, but I think maybe it just tokenized.

Andrei Alexandrescu (See Website For Email)

unread,
Apr 21, 2006, 7:40:57 AM4/21/06
to
Andrei Polushin wrote:
> I wish my compiler had at least CFront-style functionality 7 years
> ago. Having that, I would have been 99% satisfied, even for the next

> seven years. The remaining 1% is a strict conformance to those
> context-dependent name lookup rules, which are also in the standard.

By the way, my current understanding of the situation is that
context-dependent name lookup (aka two-phase name lookup) is the main
(only?) significant problem wrt implementing export. Is that correct?

If that's the case, it's not surprising that export is hard to
implement; I believe context-dependent name lookup is a no-no for all
good language design, and the arch enemy of modular reasoning. So I am
curious: what justified the existence of two-phase name lookup in the
first place? Would it be reasonable to phase out this dirty
("un-hygienic") feature?


Andrei

P.J. Plauger

unread,
Apr 21, 2006, 1:03:13 PM4/21/06
to
"Gene Bushuyev" <sp...@spamguard.com> wrote in message
news:zWG1g.50507$_S7....@newssvr14.news.prodigy.com...

> "Gene Bushuyev" <sp...@spamguard.com> wrote in message
> news:AeW0g.66360$H71....@newssvr13.news.prodigy.com...
> [...]
>> Herb's is probably the most complete analysis we have thus far. In
>> particular,
>> it demonstrates that "export" :
>>
>> 1) doesn't hide the source code;
>> 2) doesn't speed up compilation; if anything the opposite is true;
>> 3) doesn't reduce dependencies, at best it hides them from the
>> programmer;
>> 4) doesn't provide separate compilation benefits as in case of
>> non-template
>> compilation units;
>> 5) is more difficult to use than non-exported templates;
> [...]
>
> So, all the big guns hazing a little guy :-)

I see the smiley, but that doesn't alter the fact that Sutter gave a
glib summary from incomplete data and he was soundly countered by those
of us who had some real experience.

> Anyway, at the risk of seeming
> heretical, I still want to continue discussing this topic.
> Below I summarized the opinions expressed in this thread about the five
> points
> of template export, adding my comments.
>
> G.B.:
>> 1) doesn't hide the source code;
>
> Herb Sutter: source exposure for the definition remains.
>
> P.J. Plauger: You can hide source code without export and you can not hide
> it
> with export -- the issues are orthogonal.
>
> Daveed Vandevoorde: Exported templates can be compiled separately, after
> which
> the source code is no longer needed for the instantiation process.
>
> Andrei Polushin: `export' is not intended for that
>
> James Kanze: Who cares?
>
> Well, I mentioned hiding the source code above, because that's what many
> people
> expected in the beginning from "export" and undoubtly some are still
> expecting.

And you clipped my more detailed response to that.

> Irreversible conversion of the source code to binary is important for
> library
> developers, it provides a reasonable protection of the intellectual
> property.

Nope. RogueWave tried in the mid 1990s to require compiler vendors to
shroud their library code. They were soundly rebuffed and retreated
from that extreme position. I, and later my company Dinkumware, Ltd.,
have always relied on copyright protection, which better fits the
demands of the marketplace. Users want access to the source and that's
what they're going to get.

> People expected from "export" the same for templates, but it ain't gonna
> happen,
> the template definitions still must be available to the compiler at the
> point of
> instantiation. When D.V. mentioned "source code" no longer needed, it's
> definitely not the same as compilation into binary, because whatever the
> new
> form source code assumes, it is (easily) reversible; same problem as with
> .Net
> assemblies or Java bytecode. The reversible types of code hiding were
> tried
> before, there is nothing specific to "export."

Sorry, but the lossy conversion to binary can be reverse engineered,
and the lossy conversion of templates to some intermediate can also
be reversed. IME you have to be pretty dedicated (read: don't have a
social life) to revel in what you can extract from either inverse
transformation. Shrouding does work well enough, if you care. But,
as I indicated above, in real life it is neither necessary nor
desirable. Red herring.

> G.B.:
>> 2) doesn't speed up compilation; if anything the opposite is true;
>
> H.S.: it is unknown ... whether export-ized builds will in general be the
> same
> speed, faster, or slower in common real-world use.
>
> P.J.P.: We found that export is compilation speed neutral.
>
> D.V.: Separately compiled exported templates can contribute to drastically
> reduced compilation times
>
> A.P.: It does, for carefully constructed cases.
>
> J.K.: This is simply false. Try it.
>
> I guess, the answers indicate that we don't really know.

Excuse me, but the answer is that Dinkumware *really does know.* We have
a working implementation that we've exercised. What does it take to
"know" by your metric?

> The real world testing
> hasn't been done yet,

And what world do you think *we* live in?

> one reason for which is the lack of compiler support.

Excuse me again, but we got our real-world experience from a real-world
compiler. See http://www.edg.com.

> I
> would assume that significantly more work is required from complier just
> to do
> "export", e.g enforcing ODR can be a big task.

Which EDG has done.

> And I don't see any savings
> exclusively due to "export" that couldn't be also achieved without it.
> With or
> without "export," compiler isn't required rebuilding all the files that
> use the
> template, it just needs to cover all instantiations. Pre-compiled header
> files
> can also reduce the burden of template parsing.

In a different way, yes. But that's solving a different problem in a
different way.

> G.B.:
>> 3) doesn't reduce dependencies, at best it hides them from the
>> programmer;
>
> H.S.: Remember that export only hides dependencies; it doesn't eliminate
> them.
>
> P.J.P.: Here you're wrong.
>
> D.V.: Exported templates often drastically reduce effective dependencies
> compared to included templates.
>
> A.P.: What does that /hides/ mean?
>
> J.K.: Again, this is simply false.
>
> Now, "wrong" and "false" aren't very helpful in resolving this issue.

Neither is the bald statement "doesn't reduce dependencies". You *were*
given more concrete reasons, which you've again chosen to elide.

> When the template definition changes the other compilation units have to
> be
> recompiled also to cover all instantiations.

Yep. And when they don't they don't. But that wasn't the issue addressed
here.

> Compiler has to see the
> definitions
> and all dependencies that are brought with them. Dependencies are hidden
> from
> the programmer, but not from the compiler.

There are dependencies and dependencies. That was the point you seem
either to have missed or to have glossed over.

And when they don't they don't. Can we say that another way so you can
hear it?

> G.B.:
>> 5) is more difficult to use than non-exported templates;
>
>
> H.S.: "export" complicates the C++ language and makes it trickier to use,
> including that export actually changes the fundamental meaning of parts of
> the
> language in surprising ways that it is not clear were foreseen.
>
> P.J.P.: Not really. It's just a different style of partitioning code.
>
> D.V.: That's really not quantifiable. My perception is that exported
> templates
> are easier to use because of the reduced concerns
> about inclusion-interference.
>
> A.P.: It is to be used properly, even if it is difficult for novices.
>
> J.K.: That's not been my experience -- just the opposite.
>
> This item is probably the most subjective one. What is surprising and
> unintuitive to some people is not at all surprising to others. Still more
> convoluted error messages are not to be ignored; with exported templates
> the
> instantiation paths can be pretty long and confusing.

In words of one syllable -- the style we use to write exported templates
differs only in details from the style we use when you can't export them.
The diagnostics we get with exported templates are a bit more precise,
but otherwise qualitatively similar to those we get when you can't export
them. Call that subjective if you wish, but it's based on over a dozen
years of writing templates in commercial code, both with and without
export.

> I'm sure we will have more definite answers after the majority of compiler
> vendors implement this feature and people start using it in the large
> projects.

I agree. But you seem pretty quick to dismiss the definite answers that
are available today, at least when they disagree with your biases.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

vande...@gmail.com

unread,
Apr 21, 2006, 1:00:34 PM4/21/06
to

Gene Bushuyev wrote:
[...]

> G.B.:
> > 1) doesn't hide the source code;
>
> Herb Sutter: source exposure for the definition remains.
>
> P.J. Plauger: You can hide source code without export and you can not hide it
> with export -- the issues are orthogonal.
>
> Daveed Vandevoorde: Exported templates can be compiled separately, after which
> the source code is no longer needed for the instantiation process.
>
> Andrei Polushin: `export' is not intended for that
>
> James Kanze: Who cares?
>
> Well, I mentioned hiding the source code above, because that's what many people
> expected in the beginning from "export" and undoubtly some are still expecting.

Not unreasonably so (even though it was indeed not designed with
that goal as a requirement; in practice it achieves the goal though).

> Irreversible conversion of the source code to binary is important for library
> developers, it provides a reasonable protection of the intellectual property.
> People expected from "export" the same for templates, but it ain't gonna happen,
> the template definitions still must be available to the compiler at the point of
> instantiation.

Just like object code must be available at link time.

> When D.V. mentioned "source code" no longer needed, it's
> definitely not the same as compilation into binary, because whatever the new
> form source code assumes, it is (easily) reversible;

What do you base that statement on?

I suspect a compiled exported template will always contain more symbolic info
than your typical object code (except maybe when using Dave Abrahams' idea
elsewhere in this thread). However, to say that it would be easily reversible
is a stretch unless you can back it up with solid arguments.

> same problem as with .Net assemblies or Java bytecode.

Those have well-documented representations, so it's easier to do.

Even so, byte code is apparently not easy enough to keep companies from
distributing their software in that form (whereas quite a few vendors have
expressed that they won't templatize their code because it would - e.g. -
expose their particular numerical algorithms).

(Now, I'm not saying that "export" is the only means to achieve the goal.
After all, some vendors seem to be happy to distribute their templates by
just mechanically "uglifying" their source. That approach is easier to
reverse than byte codes still, but it seems acceptable to many. For those
who _really_ care, even object code is not good enough, after all.)

> The reversible types of code hiding were tried
> before, there is nothing specific to "export."

Again, have you implemented export to be able to make such
a statement?

> G.B.:
> > 2) doesn't speed up compilation; if anything the opposite is true;
>
> H.S.: it is unknown ... whether export-ized builds will in general be the same
> speed, faster, or slower in common real-world use.
>
> P.J.P.: We found that export is compilation speed neutral.
>
> D.V.: Separately compiled exported templates can contribute to drastically
> reduced compilation times
>
> A.P.: It does, for carefully constructed cases.
>
> J.K.: This is simply false. Try it.
>
> I guess, the answers indicate that we don't really know. The real world testing
> hasn't been done yet, one reason for which is the lack of compiler support. I
> would assume that significantly more work is required from complier just to do
> "export", e.g enforcing ODR can be a big task.

It depends what you mean by "big task". It's big in the sense that it
is a pain to
code up (I know: I did it), but it's a blip on the execution profile
(i.e., cheap in
terms of the compilation time).

> And I don't see any savings
> exclusively due to "export" that couldn't be also achieved without it.

You'd need an alternative mechanism. Precompiled headers have some (but not
all) of the speedup benefits of exported templates, but they have their own
serious practical downsides. The C++ modules proposal I'm championing gives
you the equivalent of exported templates (and exported just-about-anything),
but it requires larger source changes to take advantage of.

> With or
> without "export," compiler isn't required rebuilding all the files that use the
> template, it just needs to cover all instantiations. Pre-compiled header files
> can also reduce the burden of template parsing.

But not the dependencies. PCHs are also considerably more expensive to
create than compiled exported templates, and the conditions for their reuse
are quite constraining.

> G.B.:
> > 3) doesn't reduce dependencies, at best it hides them from the programmer;
>
> H.S.: Remember that export only hides dependencies; it doesn't eliminate them.
>
> P.J.P.: Here you're wrong.
>
> D.V.: Exported templates often drastically reduce effective dependencies
> compared to included templates.
>
> A.P.: What does that /hides/ mean?
>
> J.K.: Again, this is simply false.
>
> Now, "wrong" and "false" aren't very helpful in resolving this issue.

They're about as helpful as unsubstantiated claims,
I suppose.

> When the template definition changes the other compilation units have to be
> recompiled also to cover all instantiations.

No: Only the affected instantiations must be recreated. Even in the demo
version of EDG that amounts to only one point-of-instantiation per
specialization (as opposed to an included template change that in practice
forces the recompilation of every TU that directly or indirectly includes
the file with the template... even if the template is unused in that TU).

> Compiler has to see the definitions
> and all dependencies that are brought with them.

I'd recommend using export templates for a bit: That last comment
doesn't make sense to me.

> Dependencies are hidden from
> the programmer, but not from the compiler.

That may be a "sound bite", but it's not connected to reality.

> G.B.:
> > 4) doesn't provide separate compilation benefits as in case of non-template
> > compilation units;
>
> H.S.: Exported templates are not truly "separately compiled" in the usual sense
> we mean when we apply that term to functions.
>
> P.J.P.: The benefits are pretty similar, though different.
>
> D.V.:Separately compiled exported templates provide several of the same benefits
> as separately compiled functions and variables.
>
> A.P.: I use term "separate compilation" in its original meaning: separate
> compilation of template instantiations into object files, and I feel
> myself more adequate
>
> J.K.: I don't understand what you are saying here.
>
> I think, I need to elaborate what I meant in the fourth point. When we speak
> about separate compilation of non-template functions, we expect that they are
> compiled to the binary code and their definitions are not longer needed. That
> brings many benefits, such as changes to the implementation require
> recompilation only that file; the source code cannot be deduced from the binary,
> the dependencies are contained in the implementation file, etc. That's not the
> case with exported templates. When an exported implementation changes, not only
> that file, but the instantiations of the template have to be recompiled. The
> other things were already mentioned in previous items.


If you change the definition of "separate compilation" to mean "compiled
fully to native object code", then yes (short of Dave Abrahams' neat idea)
templates are not likely to be "separately compiled" any time soon.

If you keep the more conventional notion of "compiled in a separate
phase, with the results directly usable elsewhere", there is no doubt
that exported templates can be separately compiled without requiring
innovative technology.

Daveed

kanze

unread,
Apr 21, 2006, 5:43:51 PM4/21/06
to
Gene Bushuyev wrote:
> "Gene Bushuyev" <sp...@spamguard.com> wrote in message
> news:AeW0g.66360$H71....@newssvr13.news.prodigy.com...
> [...]
>> Herb's is probably the most complete analysis we have thus
>> far. In particular, it demonstrates that "export" :

>> 1) doesn't hide the source code;
>> 2) doesn't speed up compilation; if anything the opposite is true;
>> 3) doesn't reduce dependencies, at best it hides them from the
>> programmer;
>> 4) doesn't provide separate compilation benefits as in case of non-
>> template
>> compilation units;
>> 5) is more difficult to use than non-exported templates;
> [...]

> So, all the big guns hazing a little guy :-)

Yah, EDG and Dinkumware are hazing Microsoft. I'd say you've
got the big guys and the little guys mixed up.

> Anyway, at the risk of seeming heretical, I still want to
> continue discussing this topic. Below I summarized the
> opinions expressed in this thread about the five points of
> template export, adding my comments.

> G.B.:
>> 1) doesn't hide the source code;

> Herb Sutter: source exposure for the definition remains.

> P.J. Plauger: You can hide source code without export and you
> can not hide it with export -- the issues are orthogonal.

> Daveed Vandevoorde: Exported templates can be compiled
> separately, after which the source code is no longer needed
> for the instantiation process.

> Andrei Polushin: `export' is not intended for that

> James Kanze: Who cares?

> Well, I mentioned hiding the source code above, because that's
> what many people expected in the beginning from "export" and
> undoubtly some are still expecting.

Who, and why? I'd never heard it mentioned with regard to
export before Herb's paper.

> Irreversible conversion of the source code to binary is
> important for library developers, it provides a reasonable
> protection of the intellectual property. People expected from
> "export" the same for templates, but it ain't gonna happen,
> the template definitions still must be available to the
> compiler at the point of instantiation. When D.V. mentioned
> "source code" no longer needed, it's definitely not the same
> as compilation into binary, because whatever the new form
> source code assumes, it is (easily) reversible; same problem
> as with .Net assemblies or Java bytecode. The reversible
> types of code hiding were tried before, there is nothing
> specific to "export."

As Plauger pointed out, the issue is orthogonal to export -- Sun
CC 4.2 supported source code hiding -- for their templates.
They've since abandoned it. And the use of supposedly readable
byte codes for Java and .Net doesn't seem to have affected their
popularity. And major library vendors are delivering their code
in source format today -- and it doesn't seem to bother them.

As I said: who cares? It's just not an issue.

> G.B.:
>> 2) doesn't speed up compilation; if anything the opposite is true;

> H.S.: it is unknown ... whether export-ized builds will in
> general be the same speed, faster, or slower in common
> real-world use.

> P.J.P.: We found that export is compilation speed neutral.

> D.V.: Separately compiled exported templates can contribute to
> drastically reduced compilation times

> A.P.: It does, for carefully constructed cases.

> J.K.: This is simply false. Try it.

> I guess, the answers indicate that we don't really know.

Or perhaps that it depends on the application.

> The real world testing hasn't been done yet, one reason for
> which is the lack of compiler support.

Except that people I know have used export. Plauger and myself
(and I think David as well) reported real world experiences.
They differ, so doubtlessly how much speed up you get will vary,
according to the application. In some cases, it will be a lot,
in others a little, and in still others, none at all.

> I would assume that significantly more work is required from
> complier just to do "export", e.g enforcing ODR can be a big
> task.

Independently of export. Most compilers don't enforce it, and
that is that.

> And I don't see any savings exclusively due to "export" that
> couldn't be also achieved without it. With or without
> "export," compiler isn't required rebuilding all the files
> that use the template, it just needs to cover all
> instantiations.

Except that they don't. If you modify a template, the compiler
only rebuilds one of the files which use an instantiation, not
all of them.

> Pre-compiled header files can also reduce the burden of
> template parsing.

> G.B.:
>> 3) doesn't reduce dependencies, at best it hides them from
>> the programmer;

> H.S.: Remember that export only hides dependencies; it doesn't
> eliminate them.

> P.J.P.: Here you're wrong.

> D.V.: Exported templates often drastically reduce effective
> dependencies compared to included templates.

> A.P.: What does that /hides/ mean?

> J.K.: Again, this is simply false.

> Now, "wrong" and "false" aren't very helpful in resolving this
> issue.

Perhaps, but since implementations are available, some people
have concrete experience with export, and know what it can or
cannot do.

> When the template definition changes the other compilation
> units have to be recompiled also to cover all instantiations.
> Compiler has to see the definitions and all dependencies that
> are brought with them. Dependencies are hidden from the
> programmer, but not from the compiler.

Dependent names remain dependent, obviously. Non-dependent
names are NOT affected by anything I happen to include before
the header in which the template is defined. Which is not the
case without export.
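
A small sketch of the interference in question, with invented names; the
point is that without export the template's definition context is re-created
separately in every including TU:

// tmpl.h -- an ordinary, non-exported template
template <typename T>
int answer(T)
{
    return magic();   // non-dependent call: bound to whatever magic() is
}                     // visible at this point in *each* including TU

// tu1.cpp
int magic() { return 1; }
#include "tmpl.h"
int a() { return answer(0.0); }   // this answer() calls the magic() above

// tu2.cpp
int magic() { return 2; }
#include "tmpl.h"
int b() { return answer(0.0); }   // same template text, different meaning --
                                  // and an ODR violation waiting to happen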

> G.B.:
>> 4) doesn't provide separate compilation benefits as in case
>> of non-template compilation units;

> H.S.: Exported templates are not truly "separately compiled"
> in the usual sense we mean when we apply that term to
> functions.

> P.J.P.: The benefits are pretty similar, though different.

> D.V.:Separately compiled exported templates provide several of
> the same benefits as separately compiled functions and
> variables.

> A.P.: I use term "separate compilation" in its original
> meaning: separate compilation of template instantiations into
> object files, and I feel myself more adequate

> J.K.: I don't understand what you are saying here.

To be more precise, separate compilation means different things
to different people. Each thing with a separate set of
benefits.

> I think, I need to elaborate what I meant in the fourth point.
> When we speak about separate compilation of non-template
> functions, we expect that they are compiled to the binary code
> and their definitions are not longer needed.

For some definition of "binary code", I suppose.

> That brings many benefits, such as changes to the
> implementation require recompilation only that file; the
> source code cannot be deduced from the binary, the
> dependencies are contained in the implementation file, etc.
> That's not the case with exported templates. When an exported
> implementation changes, not only that file, but the
> instantiations of the template have to be recompiled. The
> other things were already mentioned in previous items.

The major importance of "separate compilation", and the problem
export was designed to address, can perhaps be categorized as
"independent compilation": the fact that various includes, etc.,
in my source file do not affect the compilation of the other
source files. Obviously, by definition, this cannot apply to
dependent names in a template. It does apply to non-dependent
names in an exported template, however.

The other advantage is, yes, that you don't have to recompile
every module which includes the template definition just because
you changed one small little implementation detail in the
template.

> G.B.:
>> 5) is more difficult to use than non-exported templates;

> H.S.: "export" complicates the C++ language and makes it
> trickier to use, including that export actually changes the
> fundamental meaning of parts of the language in surprising
> ways that it is not clear were foreseen.

> P.J.P.: Not really. It's just a different style of
> partitioning code.

> D.V.: That's really not quantifiable. My perception is that
> exported templates are easier to use because of the reduced
> concerns about inclusion-interference.

> A.P.: It is to be used properly, even if it is difficult for
> novices.

> J.K.: That's not been my experience -- just the opposite.

> This item is probably the most subjective one. What is
> surprising and unintuitive to some people is not at all
> surprising to others. Still more convoluted error messages
> are not to be ignored; with exported templates the
> instantiation paths can be pretty long and confusing.

I'll admit that I didn't notice this when I used export. But
I'll admit too that my tests involved converting working
templates to use export; as I was starting from working code, I
had very few error messages at all.

Are you saying that the error messages you've actually seen
using export are worse than those you get from any error in a
template?

> I'm sure we will have more definite answers after the majority
> of compiler vendors implement this feature and people start
> using it in the large projects.

Agreed. Until then, we'll just have to settle for what actual
experience we've got. None of which, to my knowledge, has been
negative.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

David Abrahams

unread,
Apr 21, 2006, 7:19:51 PM4/21/06
to
"Andrei Alexandrescu (See Website For Email)"
<SeeWebsit...@erdani.org> writes:

> Andrei Polushin wrote:
>> I wish my compiler had at least CFront-style functionality 7 years
>> ago. Having that, I would have been 99% satisfied, even for the next
>> seven years. The remaining 1% is a strict conformance to those
>> context-dependent name lookup rules, which are also in the standard.
>
> By the way, my current understanding of the situation is that
> context-dependent name lookup (aka two-phase name lookup)

....whew...

When people talk about eliminating 2-phase lookup, they are usually talking
about getting rid of the first phase, but the 2nd phase is just as
context-dependent!


> is the main
> (only?) significant problem wrt implementing export. Is that correct?

In what way could two-phase name lookup be a problem for export?

> If that's the case, it's not surprising that export is hard to
> implement; I believe context-dependent name lookup is a no-no for all
> good language design, and the arch enemy of modular reasoning.

Then you have to completely re-do C++. This has nothing to do with
templates:


// a.hpp
int f(int);

// b.hpp
long f(long);

// c.hpp
int g(int x) { return f(x); }

The meaning of c.hpp is completely context-dependent.

> So I am curious: what justified the existence of two-phase name
> lookup in the first place?

This has been discussed so many times that I'm surprised you don't
know the answer. The idea was to be able to detect errors in template
bodies at the point of declaration rather than at the point of
instantiation. You can't do that for the expressions involving
dependent names, but the other ones can use the ordinary lookup rules,
which take effect during phase 1.
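
A minimal sketch of what binds in which phase (all names invented):

#include <iostream>
#include <vector>

int helper(int x) { return x + 1; }    // visible where the template is defined

template <typename T>
void demo(const T& t)
{
    int a = helper(1);                 // non-dependent: checked and bound in
                                       // phase 1, at the point of definition
    std::cout << a + t.size() << '\n'; // t.size() depends on T: only checked
                                       // in phase 2, at the instantiation
}

void use()
{
    demo(std::vector<int>(3));         // phase 2 happens here, T = vector<int>
}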

> Would it be reasonable to phase out this dirty ("un-hygienic")
> feature?

Hehehehe. The first phase of two-phase name lookup is the hygienic
part. In fact, that's how it is usually justified!

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

vande...@gmail.com

unread,
Apr 21, 2006, 7:26:16 PM4/21/06
to

Andrei Alexandrescu (See Website For Email) wrote:
[...]

> By the way, my current understanding of the situation is that
> context-dependent name lookup (aka two-phase name lookup) is the main
> (only?) significant problem wrt implementing export. Is that correct?

No, I don't think it is (but I'm curious where that understanding
came from). We had implemented the two-phase rules for
included templates by the time we started export, and as far
as I remember, that code needed at most minor tweaking
for export.

The biggie is the lax ODR of C++ (and the slightly fuzzy linkage
rules). For example, if you have two TUs that define an included
template X, a class C, and you're instantiating an exported template
across that boundary, you might have to check that e.g. X<C> is
equivalent on both sides. In the general case, that gets surprisingly
hard.
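
A sketch of the kind of divergence that has to be detected (the macro and the
file names are invented; the point is only that the "same" X<C> can differ
between the TUs that feed an exported instantiation):

// x.h -- an ordinary included template
template <typename T> struct X { T value; };

// c.h -- the class C; its layout depends on the including TU
struct C {
#ifdef WITH_EXTRA
    int extra;
#endif
    int n;
};

// tu1.cpp
#define WITH_EXTRA
#include "c.h"
#include "x.h"
// ... triggers an instantiation of the exported template involving X<C> ...

// tu2.cpp
#include "c.h"
#include "x.h"
// ... also involves X<C>, but here X<C> has a different layout: an ODR
// violation that an export implementation may be in a position to notice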


> If that's the case, it's not surprising that export is hard to
> implement; I believe context-dependent name lookup is a no-no for all
> good language design, and the arch enemy of modular reasoning.

What alternative do you propose? Removing phase 1 or removing
phase 2? Or something else entirely?

> So I am
> curious: what justified the existence of two-phase name lookup in the
> first place? Would it be reasonable to phase out this dirty
> ("un-hygienic") feature?

"The Design and Evolution of C++" by Bjarne Stroustrup talks about that
in section 15.10.2 (and mention 1993 as the time that notion was
introduced; export came in 1996).

Daveed

vande...@gmail.com

unread,
Apr 21, 2006, 7:20:50 PM4/21/06
to

wka...@yahoo.com wrote:
[...]

> Couldn't include files also be compiled to a form of object code? C++
> compilers typically use bottom-up parsing (I think).

(Most use recursive descent parsing, these days.)

> So the fact
> that (in general) you can't build a single parse tree (with start
> symbol
> at the root) for an include file (whereas you can for an implementation
> unit containing exported template(s)) is not that significant.

A major problem is the preprocessor: An earlier include
file can fundamentally change the nature of declarations
in later files.

> I know that (at least at one time), the Borland compiler "pre-
> compiled" include files, but I think maybe it just tokenized.

That kind of precompilation involves taking a snapshot of the
compiler's internal state, and reloading that snapshot in
subsequent compilations. It only works if the subsequent
compilations start with exactly the same code as the first
one. So if you have 3 translation units that start with:

// TU #1:
#include <vector>
#include <list>

// TU #2:
#include <vector>
#include <iostream>

// TU #3:
#include <list>
#include <vector>

The compiler can save a snapshot right after the first #include
of TU #1 is processed, and that can be reused instead of
parsing the first #include in TU #2. However, in TU#3, that
snapshot cannot be reused, because the inclusion of
#include <list> might have changed the meaning of
#include <vector>.
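
A contrived illustration of how an earlier include can change the meaning of
a later one (real standard headers don't behave like this, but user headers
can, and the compiler cannot assume otherwise):

// a.h
#define ELEM double

// b.h
#ifndef ELEM
#define ELEM int
#endif
struct B { ELEM value; };   // the layout of B depends on what came before

// tu1.cpp
#include "b.h"              // here B::value is an int
#include "a.h"

// tu2.cpp
#include "a.h"
#include "b.h"              // here B::value is a double -- so a snapshot taken
                            // after b.h in tu1.cpp cannot be reused for tu2.cpp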

Daveed

David Abrahams

unread,
Apr 21, 2006, 7:25:54 PM4/21/06
to
"wka...@yahoo.com" <wka...@yahoo.com> writes:

> David Abrahams wrote:
>> vande...@gmail.com writes:
>>
>>>> 2) doesn't speed up compilation; if anything the opposite is true;
>>>
>>> Separately compiled exported templates can contribute to
>>> drastically reduced compilation times (primarily because the
>>> alternative of re-parsing included templates is so expensive).
>>
>> Theoretically, exported templates can be compiled down to object code
>> -- not the usual object code that gets executed at runtime, but object
>> code that gets executed at compile time -- code that instantiates the
>> templates. That could potentially have a major impact on compilation
>> speed of code using those templates. It would be the difference
>> between an interpreted language (what compilers do now to instantiate
>> templates) and a compiled one.
> ...
>
> Couldn't include files also be compiled to a form of object code?

What would that code *do*?

> C++ compilers typically use bottom-up parsing (I think).

Not really; it varies. The only one I know of that does that is VC++.
G++ uses recursive descent last I heard.

> So the fact that (in general) you can't build a single parse tree
> (with start symbol at the root) for an include file (whereas you can
> for an implementation unit containing exported template(s)) is not
> that significant.

Oh, yes it is. The problem is that parsing C++ is highly context
sensitive. You can't parse

x < y > z::foo

unless you know whether x is a template or an int variable, for
example.
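
Spelled out, the two readings of that token sequence (declarations invented
for illustration):

// file1.cpp -- x is a class template, so the tokens declare a static member
template <typename T> struct x { };
struct y { };
struct z { static x<y> foo; };
x < y > z::foo;                  // definition of z::foo, of type x<y>

// file2.cpp -- x, y and z::foo are ints, so the same tokens are comparisons
int x = 1, y = 2;
struct z { static int foo; };
int z::foo = 0;
bool b = x < y > z::foo;         // parsed as (x < y) > z::foo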

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

wka...@yahoo.com

unread,
Apr 21, 2006, 7:57:08 PM4/21/06
to

Andrei Alexandrescu (See Website For Email) wrote:
> Andrei Polushin wrote:
> > I wish my compiler had at least CFront-style functionality 7 years
> > ago. Having that, I would have been 99% satisfied, even for the next
> > seven years. The remaining 1% is a strict conformance to those
> > context-dependent name lookup rules, which are also in the standard.
>
> By the way, my current understanding of the situation is that
> context-dependent name lookup (aka two-phase name lookup) is the main
> (only?) significant problem wrt implementing export. Is that correct?
>
> If that's the case, it's not surprising that export is hard to
> implement; I believe context-dependent name lookup is a no-no for all
> good language design, and the arch enemy of modular reasoning. So I am
> curious: what justified the existence of two-phase name lookup in the
> first place? Would it be reasonable to phase out this dirty
> ("un-hygienic") feature?
...

Can you see a way of eliminating context-dependent name lookup that
still allows for the technique of writing templates that depend on
overloaded functions or template specializations that involve template
type parameters? Or are there better alternatives to this technique?
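
For concreteness, a minimal sketch of that technique (Widget and print_name
are invented names):

#include <iostream>
#include <string>

// Generic code: an unqualified, dependent call that user code is expected to
// overload for its own types.
template <typename T>
void describe(const T& value)
{
    print_name(value);    // dependent: resolved at instantiation, where
                          // argument-dependent lookup sees the user's overload
}

// User code, possibly written long after the template above:
struct Widget { std::string name; };

void print_name(const Widget& w) { std::cout << w.name << '\n'; }

int main()
{
    Widget w;
    w.name = "gear";
    describe(w);          // picks print_name(const Widget&) at instantiation
}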

wka...@yahoo.com

unread,
Apr 21, 2006, 7:56:46 PM4/21/06
to

kanze wrote:
> wka...@yahoo.com wrote:
...

> > I'm not a compiler developer, but it seems like a good
> > "export" implementation could be extended to support a
> > general, "include-file-free" import/export capability, along
> > the lines of Modula 2 or Java. I would guess there are enough
> > of us who would want to use such a capability that it would be
> > profitable for compiler makers to implement it.
>
> And there is currently a proposal to do so before the C++
> standardization committee. By none other than the person who
> actually wrote the one existing implementation of export.
>
> The problem with such proposals is in the details. I've read it
> (quickly), and it seems just what the doctor ordered. But until
> some one has actually used it, there's just no way of knowing if
> there aren't some hidden subtilities which makes it difficult,
> or even impossible, to use in a practical situation. So we have
> a bit of a vicious circle: it's not reasonable to adopt it into
> the standard without some experience with it, to get that
> experience, we need an implementation, and the implementers
> won't implement it until it has been adopted into the standard.

I can see how the implementation of such a capability would
be non-trivial. But not so much the specification of it. To
throw out a straw man, how about:

namespace abc { extern "import" x, y, z; }

to import individual identifiers, and:

namespace abc { extern "import"; }

to import an entire namespace. The dependencies of imported
names could be handled like public class names that are
dependent on private class names. For example,
this (currently valid C++ code):

class A
{
private: struct AA { int a,b; } aa;
public: AA* m(void) { return(&aa); }
};

int foo(A *a) { return(a->m()->a); }

would be analogous to:

A.cpp:

struct AA { int a, b; } aa;
AA *m(void) { return(&aa); }

foo.cpp:

extern "import" m;

int foo(void) { return(m()->a); }

That is to say, the importing compilation unit can make any use of the
referenced definition (of struct AA in this case) that doesn't require
explicitly using its name.

wka...@yahoo.com

unread,
Apr 22, 2006, 1:02:25 PM4/22/06
to
David Abrahams wrote:
> "wka...@yahoo.com" <wka...@yahoo.com> writes:
...

> > So the fact that (in general) you can't build a single parse tree
> > (with start symbol at the root) for an include file (whereas you can
> > for an implementation unit containing exported template(s)) is not
> > that significant.
>
> Oh, yes it is. The problem is that parsing C++ is highly context
> sensitive. You can't parse
>
> x < y > z::foo
>
> unless you know whether x is a template or an int variable, for
> example.
>
...

If the common practice of having an include file resolve its own
dependencies (with sub-include files if necessary) is followed,
then meaningful parsing should be possible. By creating
pointer aliases, you can prevent the optimization of binding
certain variables to registers, but that doesn't mean
this type of optimization is impractical or undesirable.
Optimizations that are only possible if good or typical
programming practice is followed can still be useful.

Andrei Alexandrescu (See Website For Email)

unread,
Apr 22, 2006, 1:02:47 PM4/22/06
to
vande...@gmail.com wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
> [...]
>
>>By the way, my current understanding of the situation is that
>>context-dependent name lookup (aka two-phase name lookup) is the main
>>(only?) significant problem wrt implementing export. Is that correct?
>
>
> No, I don't think it is (but I'm curious where that understanding
> came from). We had implemented the two-phase rules for
> included templates by the time we started export, and as far
> as I remember, that code needed at most minor tweaking
> for export.
>
> The biggie is the lax ODR of C++ (and the slightly fuzzy linkage
> rules). For example, if you have two TUs that define an included
> template X, a class C, and your instantiating an exported template
> across that boundary, you might have to check that e.g. X<C> is
> equivalent on both sides. In the general case, that gets surprisingly
> hard.

I think the two go together. Or at least I thought. My understanding in
the matter is rather fuzzy, and I'd be glad to defuzzify it.

So, my current understanding is that X<C> is hard to figure out properly
because the instantiation X<C> embeds dependencies on where the
instantiation is being made (what files have been included etc). My
overall opinion is that that lack of hygiene is the root problem, which
should not only be addressed - it should be eradicated. In other words,
whenever and wherever one says X<C>, the entire type X<C> should have a
clear anchor of its whereabouts.

I was under the probably mistaken impression that two-phase name lookup has
everything to do with the lack of hygiene, but I think I'm wrong.

Andrei

Walter Bright

unread,
Apr 22, 2006, 7:57:00 PM4/22/06
to
vande...@gmail.com wrote:

> Gene Bushuyev wrote:
>> Irreversible conversion of the source code to binary is important for library
>> developers, it provides a reasonable protection of the intellectual property.
>> People expected from "export" the same for templates, but it ain't gonna happen,
>> the template definitions still must be available to the compiler at the point of
>> instantiation.
> Just like object code must be available at link time.

It's not that analogous. Typical object code has thrown away a large
part of the semantic information of the source code, but precompiled
templates cannot.


> I suspect a compiled exported template will always contain more
> symbolic info
> than your typical object code (except maybe when using Dave Abrahams
> idea elsewhere in this thread).

I don't see how it couldn't contain quite a bit more. In order to be
able to plug in unknown values and types into an expression and
declarations, one needs to retain pretty much all of the semantic
information in declarations, statements, and expressions.

> However, to say that it would be easily reversible
> is a stretch unless you can back it up with solid arguments.

One argument is to look at the mechanical tools that can turn Java
bytecode back into reasonable source code. Since there is less semantic
information in Java bytecode than one would expect to find in precompiled
templates, the latter should be even easier to reverse than Java bytecode,
and reversing Java bytecode is a solved problem.


> Those have well-documented representations, so it's easier to do.

Security by obscurity is well known to not be an effective encryption
technique. All it takes is one person to crack it, and then the solution
is available to anyone. Even worse, if a library vendor supports
multiple compilers, then the security is only as good as the *weakest*
encryption technique used by any of them.


> Even so, byte code is apparently not easy enough to keep companies from
> distributing their software in that form (whereas quite a few vendors
> have
> expressed that they won't templatize their code because it would - e.g.
> -
> expose their particular numerical algorithms).

Since there are available automated tools to turn Java bytecode back
into source, I don't understand how it could be any easier.

(I should also point out that numerical code expressed as object code is
a lot easier to reverse engineer than other types of code - one reason
for that is the x87 FPU instructions are a stack machine, much like Java
bytecode <g>.)


> (Now, I'm not saying that "export" is the only means to achieve the
> goal.
> After all, some vendors seem to be happy to distribute their templates
> by
> just mechanically "uglifying" their source. That approach is easier to
> reverse than byte codes still, but it seems acceptable to many. For
> those
> who _really_ care, even object code is not good enough, after all.)

Much of the value in source code is actually in the comments. Just using
a comment stripper will make a lot of source code fairly useless to
anyone else <g>.

But you appear to be placing precompiled templates as being between byte
code and object code, and closer to the object code side. I would place
byte code between precompiled templates and object code.

> Again, have you implemented export to be able to make such
> a statement?

I haven't implemented C++ export. But I know my way around implementing
templates, and have given a lot of thought to how to make export work. I
have also implemented "exported" templates in the D programming language
(actually, they fall out of the module system D uses, and so require no
extra effort) (*).

> PCHs are also considerable more
> expensive to create that compiled exported templates,

I don't see how. While DMC++'s precompiled headers were complicated to
implement, their performance is gated by the actual file I/O time, not
by computing them.

> and the
> conditions for their reuse are quite constraining.

Although the conditions appear to be constraining, in practice, they are
not. Especially if you design the headers in your project to behave like
modules.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers

(*) D templates are stored internally as AST's (Abstract Syntax Trees).
When instantiated, the semantic analysis is done. Template arguments are
looked up in the context of the instantiation (just like function
arguments are). The semantic analysis of the instantiated template is
done in the context of the template definition (just like for function
definitions).

With the AST and the context, source can be fairly easily reconstituted.

Andrei Polushin

unread,
Apr 22, 2006, 7:56:12 PM4/22/06
to
Daveed Vandevoorde wrote:

> Andrei Alexandrescu wrote:
>> By the way, my current understanding of the situation is that
>> context-dependent name lookup (aka two-phase name lookup) is the main
>> (only?) significant problem wrt implementing export. Is that correct?
>
> No, I don't think it is (but I'm curious where that understanding
> came from). We had implemented the two-phase rules for
> included templates by the time we started export, and as far
> as I remember, that code needed at most minor tweaking
> for export.
>
> The biggie is the lax ODR of C++ (and the slightly fuzzy linkage
> rules). For example, if you have two TUs that define an included
> template X, a class C, and your instantiating an exported template
> across that boundary, you might have to check that e.g. X<C> is
> equivalent on both sides. In the general case, that gets surprisingly
> hard.

Probably, you were stuck too deeply in the problem when you realized its
existence. And the ODR was not the actual problem; it is a consequence of
how `export' itself was defined.

With the context-dependent name lookup, you need to be aware of all
contexts (all translation units) where the particular instantiation
takes place. If you are combining contexts from all such TUs, then you
need to check the ODR for them, but only as a consequence.

But *combining contexts* contradicts *separate compilation*, which is what
`export' was originally intended for.

You need to squeeze "the relevant context" out of all translation units
where the instantiation points appear. It's hard to define what is to
appear in that "relevant context", hard to squeeze it out (I cannot
imagine how you do that without re-translation - do you?), it's hard
to combine them all together - and the ODR check is one of the many
problems to be solved. In addition, there is no possibility for partial
implementation - only a complete implementation is somehow useful.

As a result, it's hard to implement, and that is a sad consequence
for many of us.


>> If that's the case, it's not surprising that export is hard to
>> implement; I believe context-dependent name lookup is a no-no for all
>> good language design, and the arch enemy of modular reasoning.
>
> What alternative do you propose? Removing phase 1 or removing
> phase 2? Or something else entirely?

Removing the 3rd phase. Not a joke: there should be no lookup across
translation units. You may ask a user to form a context explicitly,
e.g. when the compiler instantiates X<C> it should make the user responsible
for specifying which file contains

1) the definition of template X<>
2) the declaration of type C (thus we can ignore ODR check)
3) optionally, some additional context for instantiating X<C>

This could be done in the form of classname-to-filename binding rules
(Cfront style), or by any other means you propose. The solution differs
from EDG's in that you are probably working against the preprocessor,
instead of making it your friend.
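
One hypothetical shape such a convention could take (all file and class names
invented; this is only meant to illustrate the idea, not Cfront's or anyone's
actual rules):

// Stack.h -- declares the template; a binding rule would say that its
//            definition is to be found in Stack.cpp
template <typename T>
class Stack {
public:
    void push(const T& value);
    // ...
};

// Stack.cpp -- the definition the compiler may pull in on demand
#include "Stack.h"
template <typename T>
void Stack<T>::push(const T& value) { /* ... */ }

// Widget.h -- declares the type used as the template argument
struct Widget { int id; };

// client.cpp -- the instantiation context is then fully determined by the
//               two headers named here; no lookup across other TUs
#include "Stack.h"
#include "Widget.h"
void f() { Stack<Widget> s; s.push(Widget()); }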

I suppose it can be specified better when we have better modules
(like N1964 or such), but for now our modules are *files* - we use
them to support the modular programming paradigm, so the files might
be employed to implement `export', too.

I do not insist on that as the only solution, and I don't think it is
ideal, but it might be much easier to implement, and that might be
enough for me. I expect the other solutions might be even more useful.

--
Andrei Polushin

Walter Bright

unread,
Apr 22, 2006, 8:01:32 PM4/22/06
to
Andrei Alexandrescu (See Website For Email) wrote:
> By the way, my current understanding of the situation is that
> context-dependent name lookup (aka two-phase name lookup) is the main
> (only?) significant problem wrt implementing export. Is that correct?

DMC++ implements two phase name lookup, and I found a way to do it that
wasn't that hard. I wish that were all there was to implementing export,
but not a chance of that.

> If that's the case, it's not surprising that export is hard to
> implement; I believe context-dependent name lookup is a no-no for all
> good language design, and the arch enemy of modular reasoning.
> So I am
> curious: what justified the existence of two-phase name lookup in the
> first place?

I see it as analogous to functions: function arguments are looked up at
the call (point of instantiation) and the function body names are looked
up at the point of definition. It makes sense that templates should work
the same way, but where C++ got bizarre is that weird rules were added,
such as: you cannot redeclare template parameters even within nested local
scopes, base class member names are not found, and there are some other
surprises.
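
The analogy, spelled out in a small sketch (names invented):

int base = 10;                    // visible where both bodies below are written

int plain(int x)                  // function: the argument expression is
{                                 // evaluated at the call site, but the body
    return base + x;              // name `base' binds here, at the definition
}

template <typename T>
T generic(T x)                    // template: the template argument comes from
{                                 // the point of instantiation, but the
    return base + x;              // non-dependent name `base' still binds
}                                 // here, at the point of definition

int use()
{
    return plain(1) + generic(2); // call site / point of instantiation
}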

James Kanze

unread,
Apr 22, 2006, 8:09:17 PM4/22/06
to
wka...@yahoo.com wrote:
> David Abrahams wrote:
>> "wka...@yahoo.com" <wka...@yahoo.com> writes:
> ...
>>> So the fact that (in general) you can't build a single parse tree
>>> (with start symbol at the root) for an include file (whereas you can
>>> for an implementation unit containing exported template(s)) is not
>>> that significant.
>> Oh, yes it is. The problem is that parsing C++ is highly context
>> sensitive. You can't parse

>> x < y > z::foo

>> unless you know whether x is a template or an int variable, for
>> example.

> ...

> If the common practice of having an include file resolve its
> own dependencies (with sub-include files if necessary) is
> followed, then meaningful parsing should be possible.

A compiler has to be able to handle all correct programs
(according to the standard), not just well written ones.

> By creating pointer aliases, you can prevent the optimization
> of binding certain variables to registers, but that doesn't
> mean this type of optimization is impractical or undesirable.
> Optimizations that are only possible if good or typical
> programming practice is followed can still be useful.

Provided you can handle the alternative cases as well. (I've
seen this in optimizers: in one case, any violation of SESE
caused the optimizer to give up completely. Presumably, the
author of that compiler felt even more strongly about SESE than
I do:-).)

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

David Abrahams

unread,
Apr 22, 2006, 8:02:40 PM4/22/06
to
vande...@gmail.com writes:

> Andrei Alexandrescu (See Website For Email) wrote:
> [...]
>> By the way, my current understanding of the situation is that
>> context-dependent name lookup (aka two-phase name lookup) is the main
>> (only?) significant problem wrt implementing export. Is that correct?
>
> No, I don't think it is (but I'm curious where that understanding
> came from). We had implemented the two-phase rules for
> included templates by the time we started export, and as far
> as I remember, that code needed at most minor tweaking
> for export.
>
> The biggie is the lax ODR of C++ (and the slightly fuzzy linkage
> rules). For example, if you have two TUs that define an included
> template X, a class C, and your instantiating an exported template
> across that boundary, you might have to check that e.g. X<C> is
> equivalent on both sides.
> In the general case, that gets surprisingly
> hard.

Why would you have to check? Does export come with a mandate to
detect and report ODR violations?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


James Kanze

unread,
Apr 22, 2006, 8:08:33 PM4/22/06
to

I'm not too worried about the implementation. The author of the
proposal understands the issues, and it's not as if it were
really anything new -- Modula-2 had it more than 20 years ago.

> But not so much the specification of it.

The problem is in the details. Making it work with the current
language definition, in a way that is actually usable. What
looks eminently usable on paper isn't always in practice.

> To throw out a straw man, how about:

As I said, there is already a proposal -- it's in its third
revision. What is needed now isn't another proposal; it's an
implementation of the current one, so we can play around with it
some.

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

James Kanze

unread,
Apr 22, 2006, 8:15:29 PM4/22/06
to
David Abrahams wrote:
> "Andrei Alexandrescu (See Website For Email)"

[...]


>> So I am curious: what justified the existence of two-phase name
>> lookup in the first place?

> This has been discussed so many times that I'm surprised you don't
> know the answer. The idea was to be able to detect errors in template
> bodies at the point of declaration rather than at the point of
> instantiation. You can't do that for the expressions involving
> dependent names, but the other ones can use the ordinary lookup rules,
> which take effect during phase 1.

That's only part of it. Another major motivation, as I
understand it, was to avoid name hijacking -- a non-dependent
name will not resolve to something unexpected because of the
instantiation context.

Of course, this really requires export -- without export, two
phase lookup really doesn't make much sense.

--
James Kanze kanze...@neuf.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


James Kanze

unread,
Apr 22, 2006, 8:09:58 PM4/22/06
to
David Abrahams wrote:
> "wka...@yahoo.com" <wka...@yahoo.com> writes:

>> David Abrahams wrote:
>>> vande...@gmail.com writes:

>>>>> 2) doesn't speed up compilation; if anything the opposite is true;
>>>> Separately compiled exported templates can contribute to
>>>> drastically reduced compilation times (primarily because the
>>>> alternative of re-parsing included templates is so expensive).
>>> Theoretically, exported templates can be compiled down to object code
>>> -- not the usual object code that gets executed at runtime, but object
>>> code that gets executed at compile time -- code that instantiates the
>>> templates. That could potentially have a major impact on compilation
>>> speed of code using those templates. It would be the difference
>>> between an interpreted language (what compilers do now to instantiate
>>> templates) and a compiled one.
>> ...

>> Couldn't include files also be compiled to a form of object code?

> What would that code *do*?

What does object code do? A header can certainly be compiled to
some form of object code, given that any definition of object
code is implementation specific. Most modern compilers use some
form of byte code, like Java, I think, in order to allow
inter-module optimization at link time.

The problem with pre-compiling headers into something similar is
really macros -- the meaning of most headers sort of changes if
you precede them with a "#define int double", for example.
(Note all of the restrictions the standard has to introduce, and
the funny names implementations of the standard library use
internally, just to ensure that the meaning won't change.)
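A small illustration of the macro problem (the file and the names in it
are made up):

// vec.h -- a header an implementation would like to precompile just once
struct vec { double data[3]; };

// user1.cpp
#include "vec.h"       // vec has a member named data

// user2.cpp
#define data coords    // a perfectly legal object-like macro...
#include "vec.h"       // ...and now the "same" header declares vec::coords,
                       // which is why precompiled-header schemes restrict
                       // macros, and why library internals hide behind
                       // reserved names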

>> C++ compilers typically use bottom-up parsing (I think).

> Not really; it varies. The only one I know of that does that
> is VC++. G++ uses recursive descent last I heard.

It seems to go by periods. Bottom-up was popular for awhile,
but fell out of favor -- I think in part because it is very
difficult to get good error messages from bottom-up parsers, but
also, perhaps, because bottom-up parsers are almost always
machine generated, and languages like C++ need some special
tweaking, which doesn't obey compiler theory, and is really only
doable in a hand written parser. (I wonder if flex's parallel
parsing facility would help here.)

>> So the fact that (in general) you can't build a single parse
>> tree (with start symbol at the root) for an include file
>> (whereas you can for an implementation unit containing
>> exported template(s)) is not that significant.

> Oh, yes it is. The problem is that parsing C++ is highly
> context sensitive. You can't parse

> x < y > z::foo

> unless you know whether x is a template or an int variable,
> for example.

But you can maintain two alternate parses. On input, you check
which assumption holds, and use the corresponding parse tree.
(And yes, it is a lot more complicated than it sounds in what I
just said.)
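For concreteness, the two readings of that token sequence, written as two
separate translation units (the declarations are invented; the point is
only that the same tokens parse differently):

// TU 1: x and y are objects, so the tokens form an expression: (x < y) > z::foo
struct z { static int foo; };
int z::foo = 0;
int x = 1, y = 2;
bool b = x < y > z::foo;    // a comparison, then another comparison with z::foo

// TU 2: x is a template and y a constant, so the same tokens begin a declaration
template <int N> struct x { };
const int y = 2;
struct z { static x<y> foo; };
x<y> z::foo;                // definition of the static member z::foo, of type x<y>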

--
James Kanze kanze...@neuf.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


David Abrahams

unread,
Apr 22, 2006, 8:21:23 PM4/22/06
to
"Andrei Alexandrescu (See Website For Email)"
<SeeWebsit...@erdani.org> writes:

> So, my current understanding is that X<C> is hard to figure out properly
> because the instantiation X<C> embeds dependencies on where the
> instantiation is being made (what files have been included etc). My
> overall opinion is that that lack of hygiene is the root problem, which
> should not only be addressed - it should be eradicated. In other words,
> whenever and wherever one says X<C>, the entire type X<C> should have a
> clear anchor of its whereabouts.

Then you need to eliminate header files.

Instituting a proper module system would be a step in the right
direction, but by itself, it doesn't amount to eradication. You need
to break every C++ program currently in existence to achieve that.

> I was under the probably mistaken impression that two-phase name lookup
> has everything to do with the lack of hygiene, but I think I'm wrong.

No, it lessens the lack of hygiene.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


David Abrahams

unread,
Apr 22, 2006, 8:25:10 PM4/22/06
to
"wka...@yahoo.com" <wka...@yahoo.com> writes:

> David Abrahams wrote:
>> "wka...@yahoo.com" <wka...@yahoo.com> writes:
> ...
>> > So the fact that (in general) you can't build a single parse tree
>> > (with start symbol at the root) for an include file (whereas you can
>> > for an implementation unit containing exported template(s)) is not
>> > that significant.
>>
>> Oh, yes it is. The problem is that parsing C++ is highly context
>> sensitive. You can't parse
>>
>> x < y > z::foo
>>
>> unless you know whether x is a template or an int variable, for
>> example.
>>
> ...
>
> If the common practice of having an include file resolve its own
> dependencies (with sub-include files if necessary) is followed,
> then meaningful parsing should be possible.

Okay, yes, of a complete header file with all its dependencies.

> By creating pointer aliases, you can prevent the optimization of
> binding certain variables to registers,

Whaa?

> but that doesn't mean this type of optimization is impractical or
> undesirable. Optimizations that are only possible if good or
> typical programming practice is followed can still be useful.

I'll ask again:

David Abrahams <da...@boost-consulting.com> writes:


>> "wka...@yahoo.com" <wka...@yahoo.com> writes:
>>
>> Couldn't include files also be compiled to a form of object code?
>
> What would that code *do*?

It's only an optimization if you can say what the code would do.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


David Abrahams

unread,
Apr 23, 2006, 2:40:41 PM4/23/06
to
Walter Bright <wal...@digitalmars-nospamm.com> writes:

> where C++ got bizarre is when weird rules were added
> like you cannot redeclare template parameters even within nested local

> scopes, base member names are not found,...

of course they are found, if the base is non-dependent, and dependent
base member names _can't_ be found

char const* foo = "foo";

template <class B>
struct D : B
{
D() { foo = 1; } // B has a member named foo?
};

And if you don't like that example, try:

long x;

template <class T>
struct base
{
int x;
};

template <class T>
struct derived : base<T>
{
derived() { x = 1; }
};

template <>
struct base<char*>
{
// no data member named x
};

derived<char*> y;

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Andrei Alexandrescu (See Website For Email)

unread,
Apr 23, 2006, 2:54:44 PM4/23/06
to
James Kanze wrote:

> wka...@yahoo.com wrote:
>>By creating pointer aliases, you can prevent the optimization
>>of binding certain variables to registers, but that doesn't
>>mean this type of optimization is impractical or undesirable.
>>Optimizations that are only possible if good or typical
>>programming practice is followed can still be useful.
>
>
> Providing you can handle the alternative cases as well. (I've
> seen this in optimizers : in one case I've seen, any violation
> of SESE caused the optimizer to give up completely. Presumably,
> the author of this compiler felt even stronger about SESE than I
> do:-).)

Let me guess: the compiler was distributed on 360 KB floppy disks.
Because today, such a compiler would be quite useless in wake of
exceptions - just as the SESE way of writing code itself.

Andrei

David Abrahams

unread,
Apr 23, 2006, 2:58:06 PM4/23/06
to
James Kanze <kanze...@neuf.fr> writes:

> David Abrahams wrote:
>> "Andrei Alexandrescu (See Website For Email)"
>
> [...]
>>> So I am curious: what justified the existence of two-phase name
>>> lookup in the first place?
>
>> This has been discussed so many times that I'm surprised you don't
>> know the answer. The idea was to be able to detect errors in template
>> bodies at the point of declaration rather than at the point of
>> instantiation. You can't do that for the expressions involving
>> dependent names, but the other ones can use the ordinary lookup rules,
>> which take effect during phase 1.
>
> That's only part of it. Another major motivation, as I
> understand it, was to avoid name hijacking -- a non-dependant
> name will not resolve to something unexpected because of the
> instantiation context.

Yeah, IIRC, you're right.

> Of course, this really requires export

Of course?

> -- without export, two phase lookup really doesn't make much sense.

I can't understand why you'd say that. It makes a lot of sense to me,
and I've never used export.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Walter Bright

unread,
Apr 24, 2006, 4:08:11 PM4/24/06
to
David Abrahams wrote:
> Walter Bright <wal...@digitalmars-nospamm.com> writes:
>> where C++ got bizarre is when weird rules were added
>> like you cannot redeclare template parameters even within nested local
>> scopes, base member names are not found,...
> of course they are found, if the base is non-dependent,

That is true, I should have mentioned that. Here's an excerpt from my
template talk at SDWest on the strange rules (example adapted from the
C++98 Standard):

int g(double d) { return 1; }

typedef double A;

template<class T> struct B
{
typedef int A;
};

template<class T> struct X : B<T>
{
A a; // a has type double
int T; // error, T redeclared
int foo()
{ char T; // error, T redeclared
return g(1); // always returns 1
}
};

int g(int i) { return 2; } // this definition not seen by X


> and dependent base member names _can't_ be found

There is no technical reason why they cannot be. I had to actually
*break* finding them in DMC++ so it would be standard conformant.
Furthermore, in D, the template lookup rules actually work as one would
expect. The above example code written in D would be:

int g(double d) { return 1; }

typedef double A;

class B(T)
{
typedef int A;
}

class X(T) : B!(T)
{
A a; // a has type int
int T; // ok, T redeclared as int
int foo()
{ char T; // ok, T redeclared as char
return g(1); // always returns 2
}
};

int g(int i) { return 2; } // functions can be forward referenced


-Walter Bright
www.digitalmars.com C, C++, D programming language compilers


kanze

unread,
Apr 24, 2006, 4:03:13 PM4/24/06
to
David Abrahams wrote:
> James Kanze <kanze...@neuf.fr> writes:

> > David Abrahams wrote:
> >> "Andrei Alexandrescu (See Website For Email)"

> > [...]
> >>> So I am curious: what justified the existence of two-phase
> >>> name lookup in the first place?

> >> This has been discussed so many times that I'm surprised
> >> you don't know the answer. The idea was to be able to
> >> detect errors in template bodies at the point of
> >> declaration rather than at the point of instantiation. You
> >> can't do that for the expressions involving dependent
> >> names, but the other ones can use the ordinary lookup
> >> rules, which take effect during phase 1.

> > That's only part of it. Another major motivation, as I
> > understand it, was to avoid name hijacking -- a
> > non-dependent name will not resolve to something unexpected
> > because of the instantiation context.

> Yeah, IIRC, you're right.

> > Of course, this really requires export

> Of course?

> > -- without export, two phase lookup really doesn't make much sense.

> I can't understand why you'd say that. It makes a lot of sense to me,
> and I've never used export.

To really prevent name hijacking, you need export -- two phase
lookup without export only does a partial job, since the
implementation still sees names injected before the definition
of the template, i.e. any headers or definitions you might
provide before including the header with the template.
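A minimal sketch of that injection, with invented names standing in for a
real library header:

// user.cpp
namespace helpers { inline void log(double) { /* the user's own logger */ } }
using helpers::log;

// imagine the following template arriving via #include "lib.h":
template <class T>
void trace(T t)
{
    log(1.5);   // non-dependent: bound at the point of definition, which in
                // the include model is effectively *here*, after the user's
                // using-declaration, so helpers::log is what gets found
}

int main()
{
    trace(42);  // another TU, with a different log visible before the
                // include, silently binds a different function; export would
                // keep the definition in its own translation unit instead
}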

If the goal of two phased lookup is to allow clear error
messages, it only partially succeeded. At least, the error
messages I see when I make a mistake in a template are usually
far from clear (but this does depend on the type of error -- for
some types of errors, there is a definite improvement).

If the goal is to prevent name hijacking, then you really need
export as well.

--
James Kanze GABI Software

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

unread,
Apr 24, 2006, 4:02:26 PM4/24/06
to
Andrei Alexandrescu (See Website For Email) wrote:
> James Kanze wrote:
> > wka...@yahoo.com wrote:
> >>By creating pointer aliases, you can prevent the
> >>optimization of binding certain variables to registers, but
> >>that doesn't mean this type of optimization is impractical
> >>or undesirable. Optimizations that are only possible if good
> >>or typical programming practice is followed can still be
> >>useful.

> > Providing you can handle the alternative cases as well.
> > (I've seen this in optimizers : in one case I've seen, any
> > violation of SESE caused the optimizer to give up
> > completely. Presumably, the author of this compiler felt
> > even stronger about SESE than I do:-).)

> Let me guess: the compiler was distributed on 360 KB floppy
> disks.

On mag tape, actually, I think. It goes back to before floppy
disks were widespread.

> Because today, such a compiler would be quite useless in wake
> of exceptions - just as the SESE way of writing code itself.

It wasn't a C++ compiler, so the problem didn't come up:-).

As I said, even I wouldn't go that far. My point was just that
optimizers can be totally arbitrary about what they optimize, as
long as the compiler still handles all legal code in some way.
Presumably, the compiler could even intentionally pessimize
code that didn't conform to the author's ideas of good code.
In practice, I doubt such a compiler would find a wide market.

--
James Kanze GABI Software

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Walter Bright

unread,
Apr 24, 2006, 4:06:16 PM4/24/06
to
Andrei Alexandrescu (See Website For Email) wrote:
> James Kanze wrote:
>> wka...@yahoo.com wrote:
>>> By creating pointer aliases, you can prevent the optimization
>>> of binding certain variables to registers, but that doesn't
>>> mean this type of optimization is impractical or undesirable.
>>> Optimizations that are only possible if good or typical
>>> programming practice is followed can still be useful.
>>
>> Providing you can handle the alternative cases as well. (I've
>> seen this in optimizers : in one case I've seen, any violation
>> of SESE caused the optimizer to give up completely. Presumably,
>> the author of this compiler felt even stronger about SESE than I
>> do:-).)
>
> Let me guess: the compiler was distributed on 360 KB floppy disks.
> Because today, such a compiler would be quite useless in wake of
> exceptions - just as the SESE way of writing code itself.

There's also no reason to even write an optimizer that way. Algorithms
that don't require SESE, and don't compromise on results, have been
around since at least the 1970's.

[Anecdote: back in the 80's, when dinosaurs ruled the earth, I was on a
C compiler vendor panel at SDWest. The first question asked was "do you
have a version of the compiler that will work on a floppy disk only
computer?" Down the line, each compiler vendor pontificated on how they
had a special configuration that would work on floppies. Then it got to
me. I said that our floppy only compiler version cost $200 extra and
came with a hard disk. That was the end of that, and the question never
came up again <g>.]

It wasn't until 1992 or 93, though, that we were finally able to abandon
distributing the compiler on floppies, when CD drives became ubiquitous
enough. And just in time, too, juggling a dozen or more floppies was a
big pain for everyone.

Even so, the very first data flow analysis optimizing compiler I wrote,
back in 1985 or so, did not rely on SESE (or even care about it).

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers


vande...@gmail.com

unread,
Apr 24, 2006, 4:10:14 PM4/24/06
to

David Abrahams wrote:
> vande...@gmail.com writes:
>
> > Andrei Alexandrescu (See Website For Email) wrote:
> > [...]
> >> By the way, my current understanding of the situation is that
> >> context-dependent name lookup (aka two-phase name lookup) is the main
> >> (only?) significant problem wrt implementing export. Is that correct?
> >
> > No, I don't think it is (but I'm curious where that understanding
> > came from). We had implemented the two-phase rules for
> > included templates by the time we started export, and as far
> > as I remember, that code needed at most minor tweaking
> > for export.
> >
> > The biggie is the lax ODR of C++ (and the slightly fuzzy linkage
> > rules). For example, if you have two TUs that define an included
> > template X, a class C, and your instantiating an exported template
> > across that boundary, you might have to check that e.g. X<C> is
> > equivalent on both sides.
> > In the general case, that gets surprisingly
> > hard.
>
> Why would you have to check? Does export come with a mandate to
> detect and report ODR violations?

"export" doesn't specify more required ODR diagnostics than other
language constructs (generally, that means that within-TU ODR
violations must be diagnosed, but across-TU ODR violations don't
have to be diagnosed).
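To make that concrete, a rough sketch of the kind of cross-TU situation
involved (file names are invented, and I'm glossing over exactly where the
export keyword has to appear):

// a.cpp
struct C { int x; };                    // this TU's idea of C...
template <class T> struct X { T t; };   // ...and of the included template X
template <class T> void f(T);           // exported; defined in f.cpp
void use_a() { f(X<C>()); }             // instantiates f< X<C> > across the boundary

// b.cpp
struct C { long x; };                   // silently different C: an ODR violation
template <class T> struct X { T t; };
template <class T> void f(T);
void use_b() { f(X<C>()); }

// f.cpp -- the exported definition; to instantiate f< X<C> > it has to work
// with the definition contexts of both a.cpp and b.cpp, and therefore to
// decide whether their X<C> are "the same" type -- the check in question
export template <class T> void f(T)
{
    // ...
}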

However, I did not see an easy way to gracefully recover from errors
(i.e., without generating completely meaningless internal structures
that would eventually lead to a crash) without doing the ODR checks.
It might be possible to avoid that with some alternative C++ front end
designs, but my guess is that such designs will likely pay a
significant
performance cost. I might be wrong (e.g. blinded by the traditional
intermediate language approaches).

Daveed

wka...@yahoo.com

unread,
Apr 24, 2006, 4:16:25 PM4/24/06
to

David Abrahams wrote:
> "wka...@yahoo.com" <wka...@yahoo.com> writes:

I must not be understanding or interpreting your questions correctly,
because I'm sure my answers are not telling you anything you
don't already know.

> > David Abrahams wrote:
> >> "wka...@yahoo.com" <wka...@yahoo.com> writes:

> > By creating pointer aliases, you can prevent the optimization of
> > binding certain variables to registers,
>
> Whaa?

for (int i = 0; i < 100; i++)
    foo(&i);

The compiler has to write i to memory before, and re-read it after, each
call to foo in the loop, so the index i cannot be kept in a register, as
it typically would be for a small "for" loop.

...


> I'll ask again:
>
> David Abrahams <da...@boost-consulting.com> writes:
> >> "wka...@yahoo.com" <wka...@yahoo.com> writes:
> >>
> >> Couldn't include files also be compiled to a form of object code?
> >
> > What would that code *do*?
>
> It's only an optimization if you can say what the code would do.

...

To clarify, optimization here means optimization of compile time
not execution time. The object/intermediate code would (at a
minimum) be a representation of the tokenized include file source
code. Hopefully it would capture some/all parse results. In
some cases, it could contain machine language object code.
When the compilation unit that (directly or indirectly) included
the header was compiled, the header file object code would
be further translated to machine language object code.

David Abrahams

unread,
Apr 25, 2006, 3:25:57 PM4/25/06
to
Walter Bright <wal...@digitalmars-nospamm.com> writes:

> David Abrahams wrote:
>> Walter Bright <wal...@digitalmars-nospamm.com> writes:
>>> where C++ got bizarre is when weird rules were added
>>> like you cannot redeclare template parameters even within nested local
>>> scopes, base member names are not found,...
>> of course they are found, if the base is non-dependent,
>
> That is true, I should have mentioned that. Here's an excerpt from my
> template talk at SDWest on the strange rules (example adapted from the
> C++98 Standard):
>
> int g(double d) { return 1; }
>
> typedef double A;
>
> template<class T> struct B
> {
> typedef int A;
> };
>
> template<class T> struct X : B<T>
> {
> A a; // a has type double
> int T; // error, T redeclared
> int foo()
> { char T; // error, T redeclared
> return g(1); // always returns 1
> }
> };
>
> int g(int i) { return 2; } // this definition not seen by X
>
>
>> and dependent base member names _can't_ be found
>
> There is no technical reason why they cannot be.

Of course there is. Did you see the example in my previous posting?
It looks as though you completely ignored it. Here's yet another
example (though you should really go back and look at the others,
too):

template <class T>
struct base;

template <class T>
struct derived : base<T>
{

A a; // what type does a have?
};

template <class T>
struct base<T*>
{
typedef int A;
};

template <class T, class U>
struct base<T(U)>
{
typedef char* A;
};

> I had to actually *break* finding them in DMC++ so it would be
> standard conformant.

Then, IMO, you were missing something to begin with.

> Furthermore, in D, the template lookup rules
> actually work as one would expect. The above example code written in
> D would be:
>
> int g(double d) { return 1; }
>
> typedef double A;
>
> class B(T)
> {
> typedef int A;
> }
>
> class X(T) : B!(T)
> {
> A a; // a has type int
> int T; // ok, T redeclared as int
> int foo()
> { char T; // ok, T redeclared as char
> return g(1); // always returns 2
> }
> };

What if a specialization comes along? Does that affect whether "a"
has type int above? Or is it fixed for all time?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


James Kanze

unread,
Apr 25, 2006, 6:33:09 PM4/25/06
to
Walter Bright wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>> James Kanze wrote:
>>> wka...@yahoo.com wrote:
>>>> By creating pointer aliases, you can prevent the optimization
>>>> of binding certain variables to registers, but that doesn't
>>>> mean this type of optimization is impractical or undesirable.
>>>> Optimizations that are only possible if good or typical
>>>> programming practice is followed can still be useful.
>>> Providing you can handle the alternative cases as well. (I've
>>> seen this in optimizers : in one case I've seen, any violation
>>> of SESE caused the optimizer to give up completely. Presumably,
>>> the author of this compiler felt even stronger about SESE than I
>>> do:-).)
>> Let me guess: the compiler was distributed on 360 KB floppy disks.
>> Because today, such a compiler would be quite useless in wake of
>> exceptions - just as the SESE way of writing code itself.

> There's also no reason to even write an optimizer that way.
> Algorithms that don't require SESE, and don't compromise on
> results, have been around since at least the 1970's.

Certainly. Most of the early work with optimizers concerned
Fortran -- Fortran IV, in fact, which didn't have any of the
structures we are currently used to.

According to the authors of the compiler: all information
concerning the structure and the flow of the program was taken
from the structure of the code -- they simply skipped the flow
analysis part of optimization completely. I don't doubt that
that made the compiler easier to write, and probably a little
bit faster; their argument was that people who wrote goto's and
such deserved to pay for it.

I mentioned it because I found the fact somewhat "funny", in an
odd sort of way. And simply to point out that once you compile
all legal programs "correctly", you're free to do what you want
with them in terms of optimization. Formally, from a standards
point of view, of course; while I do prefer an SESE style of
programming, I consider it somewhat arrogant and pretentious to
impose such a style by means of the optimizer.

[...]


> Even so, the very first data flow analysis optimizing compiler
> I wrote, back in 1985 or so, did not rely on SESE (or even
> care about it).

I've seen a fair number of optimizing compilers, and even worked
on some, but I only know of one that did this. Typically, most
of the optimizers I've seen do the opposite: they throw out any
flow information embedded in the sources (like the fact that
while is a loop), and regenerate it from the much lower level
structures which come out of the intermediate language.

--
James Kanze kanze...@neuf.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


Walter Bright

unread,
Apr 25, 2006, 6:41:29 PM4/25/06
to
David Abrahams wrote:
> Walter Bright <wal...@digitalmars-nospamm.com> writes:
>> There is no technical reason why they cannot be.
>
> Of course there is. Did you see the example in my previous posting?
> It looks as though you completely ignored it. Here's yet another
> example (though you should really go back and look at the others,
> too):
>
> template <class T>
> struct base;
>
> template <class T>
> struct derived : base<T>
> {
> A a; // what type does a have?
> };
>
> template <class T>
> struct base<T*>
> {
> typedef int A;
> };
>
> template <class T, class U>
> struct base<T(U)>
> {
> typedef char* A;
> };

Are you asking what type A has according to the Standard, or what type A
should have based on what one would intuitively expect it to be? What
one would intuitively expect it to be is either 'int' or 'char*'
depending on what the value of T is. And there is no technical reason
why that cannot be how it works in C++, except that it was Standardized
to behave differently.

(I say "unintuitive" because everyone I've explained the C++ behavior to
who was not already familiar with it found it to be surprising.)


>> I had to actually *break* finding them in DMC++ so it would be
>> standard conformant.
> Then, IMO, you were missing something to begin with.

In earlier versions of DMC++, it worked like it works in D. I haven't
discovered anything I missed, other than being surprised that the
Standard mandated the behavior of not looking inside dependent base classes.


>> Furthermore, in D, the template lookup rules
>> actually work as one would expect. The above example code written in
>> D would be:
>>
>> int g(double d) { return 1; }
>>
>> typedef double A;
>>
>> class B(T)
>> {
>> typedef int A;
>> }
>>
>> class X(T) : B!(T)
>> {
>> A a; // a has type int
>> int T; // ok, T redeclared as int
>> int foo()
>> { char T; // ok, T redeclared as char
>> return g(1); // always returns 2
>> }
>> };
>
> What if A specialization comes along?

I assume you meant "What if a specialization of B happens later?" No
problem, since forward references to template definitions work just
fine. All they have to be is in scope.

> Does that affect whether "a" has type int above?

Yes.

> Or is it fixed for all time?

It's determined by the instantiated definition of B. For example, if we
append the following to the example:

class B(T:char) // specialization of template B with type char
{
typedef short A;
}

X!(char) x;

now x.a will be of type short.

I do know why the C++ Standard is as it is; it is to support parsing of
template bodies with no semantic knowledge of the values of the
parameters, and this makes things problematic because C++ can't be
parsed without semantic knowledge of identifiers. However, the Standard
(to the best of my language-lawyerly eyeballs) does not require such
preparsing. Parsing can be deferred until instantiation time. In my
opinion (obviously not widely shared!), the benefits of preparsing
template bodies are more than outweighed by the downside of the
inconsistent, unintuitive rules necessary to support it. But that's all
water long under the bridge, and C++ isn't going to change. It is what
it is.

But D can do it differently. Preparsing works in D template bodies
because the language is designed to be parseable without semantic
knowledge. The semantic analysis is done at instantiation time.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers


Andrei Alexandrescu (See Website For Email)

unread,
Apr 25, 2006, 6:40:53 PM4/25/06
to

Walter Bright wrote:
> There is no technical reason why they cannot be. I had to actually
> *break* finding them in DMC++ so it would be standard conformant.
> Furthermore, in D, the template lookup rules actually work as one would
> expect. The above example code written in D would be:
>
> int g(double d) { return 1; }
>
> typedef double A;
>
> class B(T)
> {
> typedef int A;
> }
>
> class X(T) : B!(T)
> {
> A a; // a has type int
> int T; // ok, T redeclared as int
> int foo()
> { char T; // ok, T redeclared as char
> return g(1); // always returns 2
> }
> };
>
> int g(int i) { return 2; } // functions can be forward referenced

How about this: I have a module foo defining

int g(int i) { return 2; }

and then I write the following:

=======================================


int g(double d) { return 1; }

typedef double A;

class B(T)
{
typedef int A;
}

class X(T) : B!(T)
{
A a; // a has type int
int T; // ok, T redeclared as int
int foo()
{ char T; // ok, T redeclared as char
return g(1); // always returns 2
}
};

import foo;
========================================

What's gonna happen?

Even in the same module, the winner is not crystal-clear. There is
something (nice) to be said about conceptual parallel parsing (all
definitions in a module are entered simultaneously), but at the same
time, the simplicity of "a symbol is in effect from its declaration point
onwards" is attractive too.


Andrei

Jerry Coffin

unread,
Apr 26, 2006, 5:42:52 PM4/26/06
to
In article
<gbidnZSZh6XDndHZ...@comcast.com>,
wal...@digitalmars-nospamm.com says...

[ ... ]

> There's also no reason to even write an optimizer that way. Algorithms
> that don't require SESE, and don't compromise on results, have been
> around since at least the 1970's.

Not anymore anyway -- though there were compilers written
before the 1970s. I'm sure I'm not the only one here to
remember a time when optimizers were often limited as
much by practicality as known algorithms. At one time it
was claimed that one of IBM's PL/I compilers was split up
into over 100 passes (since at that time a mainframe's
entire memory was smaller than the cache on a modern
CPU...) Never having seen the source code, I can't verify
that directly, but it was sure slow enough to make that
easy to believe.

[ ... ]

> It wasn't until 1992 or 93, though, that we were finally able to
> abandon
> distributing the compiler on floppies, when CD drives became
> ubiquitous
> enough. And just in time, too, juggling a dozen or more floppies was a
> big pain for everyone.
>
> Even so, the very first data flow analysis optimizing compiler I
> wrote,
> back in 1985 or so, did not rely on SESE (or even care about it).

If memory serves, however, you wrote your compiler based
largely on information in the Dragon Book -- by that
time, optimization was relatively well understood. 20+
years earlier, it was much more a black art. At one time,
limits in the compiler often reflected little more than
the limits in the understanding and/or imagination of the
author (and, as above, sometimes reflected nothing more
than what they could reasonably fit into memory).

--
Later,
Jerry.

The universe is a figment of its own imagination.

Andrei Alexandrescu (See Website For Email)

unread,
Apr 26, 2006, 5:55:22 PM4/26/06
to

My opinion is a bit different. To summarize it: it's best to do semantic
analysis (the more the better) when seeing the template body. It's bad
that C++ needs semantics just to be able to parse.

More details: It is best to be able to do semantic analysis when you see
the template definition, but not for C++'s reasons (getting the parse
right). Parsing indeed should be doable without much, or any, semantic
analysis. It's best to do "early" semantic analysis to encourage
separate compilation, separate analysis, and separate reasoning (both by
humans and by machines) about code. Perhaps the biggest leap in software
productivity was the introduction of entities that had the same
semantics regardless of the code they were used in. That has since
become so obvious, we tend to forget about it.

Ideally, it should be enough for a compiler to see a template definition
to figure out the following:

(1) What exact requirements a type is expected to meet to be proper for
instantiating the template;

(2) Which parts of the template impose each of those requirements.

Point 1 is necessary for separate compilation, point 2 is necessary for
meaningful error messages (that could also help the template author to
relax the requirements).
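For illustration only (C++20 concepts postdate this discussion, but they
express exactly the two points above):

#include <concepts>

// (1) the requirements a type must meet, stated up front...
template <class T>
concept LessThanComparable = requires(T a, T b) {
    { a < b } -> std::convertible_to<bool>;
};

// ...so the definition can be checked against them without seeing any
// arguments at all
template <LessThanComparable T>
const T& min_of(const T& a, const T& b)
{
    return (b < a) ? b : a;   // (2) the '<' here is what imposes the requirement
}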

So if D defers all of the hard work to instantiation time, paint me
unimpressed. That's just C++ cleaned of the C-inherited ambiguities that
led to the "typename" ab(d)omination.


Andrei

Walter Bright

unread,
Apr 26, 2006, 6:12:52 PM4/26/06
to
James Kanze wrote:
> I've seen a fair number of optimizing compilers, and even worked
> on some, but I only know of one that did this. Typically, most
> of the optimizers I've seen do the opposite: they throw out any
> flow information embedded in the sources (like the fact that
> while is a loop), and regenerate it from the much lower level
> structures which come out of the intermediate language.

That's certainly how the Digital Mars optimizer works and always has
worked. But back in the 80's, I remember reading about other compilers
that *did* only optimize loops if they were specified as "for" loops,
and that quit if there were any goto's. The implementors were quite
proud of them, but my thought was that they ought to at least give a
cursory glance at a compiler book <g>.

Or maybe I was just fortunate. Back in '82, I took a summer course on
compilers at Stanford put on by Ullman and Hennessey (yes, those guys!).
It gave me a good solid foundation on how to do optimizers.

Walter Bright

unread,
Apr 27, 2006, 3:31:41 PM4/27/06
to
Andrei Alexandrescu (See Website For Email) wrote:
> Walter Bright wrote:
>> But D can do it differently. Preparsing works in D template bodies
>> because the language is designed to be parseable without semantic
>> knowledge. The semantic analysis is done at instantiation time.
> My opinion is a bit different. To summarize it: it's best to do
> semantic
> analysis (the more the better) when seeing the template body. It's bad
> that C++ needs semantics just to be able to parse.

Being able to parse the template bodies is not that big a deal in and of
itself, the important thing is to parse it at some point at compile
time. When in the compilation process it is done doesn't really matter.

The big deal, however, in being able to parse without a symbol table is
that one can write tools that can analyze source code without having to
build a full blown compiler. Being able to parse the code *correctly*
without needing to spend 10 years first building a compliant front end
opens up a big door to all kinds of tools that can be written that can
manipulate source code.

> More details: It is best to be able to do semantic analysis when
> you see
> the template definition, but not for C++'s reasons (getting the parse
> right). Parsing indeed should be doable without much, or any, semantic
> analysis. It's best to do "early" semantic analysis to encourage
> separate compilation, separate analysis, and separate reasoning
> (both by
> humans and by machines) about code. Perhaps the biggest leap in
> software
> productivity was the introduction of entities that had the same
> semantics regardless of the code they were used in. That has since
> become so obvious, we tend to forget about it.
>
> Ideally, it should be enough for a compiler to see a template
> definition
> to figure out the following:
>
> (1) What exact requirements a type is expected to meet to be proper
> for
> instantiating the template;
>
> (2) Which parts of the template impose each of those requirements.
>
> Point 1 is necessary for separate compilation, point 2 is necessary
> for
> meaningful error messages (that could also help the template author to
> relax the requirements).

I think the requirements a template parameter should meet (i.e.
"constraints") should be specified by the template author in the
interface to the template (analogously to how function parameter types
must be explicit). Having the compiler figure them out for you I'm not
sure of the utility of.

> So if D defers all of the hard work to instantiation time,

Ironically, in C++ parsing is the hard part. In D, parsing is the easy
part <g>.

> paint me unimpressed. That's just C++ cleaned of the C-inherited
> ambiguities that
> led to the "typename" ab(d)omination.

Fixing the name lookup rules is far from the only thing improved in D
templates. My aim is to make templates as easy to write as ordinary
classes or functions are - cleaning up the syntax and the rules is a
necessary step for that. Templates should be writable by mortals, not
just the gods <g>.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers


Andrei Alexandrescu (See Website For Email)

unread,
Apr 28, 2006, 9:27:49 AM4/28/06
to
Walter Bright wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>
>>Walter Bright wrote:
>>
>>>But D can do it differently. Preparsing works in D template bodies
>>>because the language is designed to be parseable without semantic
>>>knowledge. The semantic analysis is done at instantiation time.
>>
>>My opinion is a bit different. To summarize it: it's best to do
>>semantic
>>analysis (the more the better) when seeing the template body. It's bad
>>that C++ needs semantics just to be able to parse.
>
>
> Being able to parse the template bodies is not that big a deal in and of
> itself, the important thing is to parse it at some point at compile
> time. When in the compilation process it is done doesn't really matter.
>
> The big deal, however, in being able to parse without a symbol table is
> that one can write tools that can analyze source code without having to
> build a full blown compiler. Being able to parse the code *correctly*
> without needing to spend 10 years first building a compliant front end
> opens up a big door to all kinds of tools that can be written that can
> manipulate source code.

This is a misunderstanding. I agree with everything you've said, but it
does not address anything of what I said.

>>Ideally, it should be enough for a compiler to see a template
>>definition
>>to figure out the following:
>>
>>(1) What exact requirements a type is expected to meet to be proper
>>for
>>instantiating the template;
>>
>>(2) Which parts of the template impose each of those requirements.
>>
>>Point 1 is necessary for separate compilation, point 2 is necessary
>>for
>>meaningful error messages (that could also help the template author to
>>relax the requirements).
>
>
> I think the requirements a template parameter should meet (i.e.
> "constraints") should be specified by the template author in the
> interface to the template (analogously to how function parameter types
> must be explicit). Having the compiler figure them out for you I'm not
> sure of the utility of.

So you don't need to invent syntax for the constraints, among other
things. Much like type deduction, if it's something that the compiler
can do better than the human, let the compiler do it. I admit I'm
ambivalent about that. Sometimes it's better to have the human sit down
and figure out the constraints in advance.

>>So if D defers all of the hard work to instantiation time,
>
>
> Ironically, in C++ parsing is the hard part. In D, parsing is the easy
> part <g>.
>
>
>>paint me unimpressed. That's just C++ cleaned of the C-inherited
>>ambiguities that
>>led to the "typename" ab(d)omination.
>
>
> Fixing the name lookup rules is far from the only thing improved in D
> templates. My aim is to make templates as easy to write as ordinary
> classes or functions are - cleaning up the syntax and the rules is a
> necessary step for that. Templates should be writable by mortals, not
> just the gods <g>.

That's all good, but it's hard to quantify such fuzzy aims and to measure
progress. I'm big on reining things in and finding measures. One measure is
exactly what I said: to what extent can you compile a template
separately? D's answer seems to be: nothing beyond parsing. We parse the
thing and store the AST for later use. I believe that that's the wrong
answer, one that takes the language astray from the goal of making it
easy to write templates.


Andrei

Tom Widmer

unread,
Apr 29, 2006, 7:16:19 PM4/29/06
to

Incidentally, have you seen this paper on (amongst other things)
separate compilation of templates?

http://osl.iu.edu/~tveldhui/papers/2000/tmpw00/index.html

Tom

David Abrahams

unread,
Apr 29, 2006, 7:28:13 PM4/29/06
to
Walter Bright <wal...@digitalmars-nospamm.com> writes:

> Andrei Alexandrescu (See Website For Email) wrote:
>> Walter Bright wrote:
>>> But D can do it differently. Preparsing works in D template bodies
>>> because the language is designed to be parseable without semantic
>>> knowledge. The semantic analysis is done at instantiation time.
>> My opinion is a bit different. To summarize it: it's best to do
>> semantic analysis (the more the better) when seeing the template body.
>> It's bad that C++ needs semantics just to be able to parse.
>
> Being able to parse the template bodies is not that big a deal in and of
> itself, the important thing is to parse it at some point at compile
> time. When in the compilation process it is done doesn't really matter.

It definitely matters whether it's before or after instantiation. If
you can parse the entire template body at definition time, *and* you
can do some semantic analysis, you can avoid those long instantiation
backtraces. See ConceptGCC.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Walter Bright

unread,
Apr 30, 2006, 1:05:46 PM4/30/06
to
David Abrahams wrote:
> It definitely matters whether it's before or after instantiation. If
> you can parse the entire template body at definition time, *and* you
> can do some semantic analysis, you can avoid those long instantiation
> backtraces. See ConceptGCC.

Seems to me that if an error in the template instantiation happens
during semantic analysis, the compiler could check to see if it is based
on the parameters or not, and if not, it doesn't need to put out backtraces.

And, of course, you could just ignore the backtrace part of the message
if it doesn't help.

I don't see why semantic analysis *has* to happen at definition time to
get better error messages.

Andrei Alexandrescu (See Website For Email)

unread,
May 1, 2006, 11:27:14 AM5/1/06
to
> Incidentally, have you seen this paper on (amongst other things)
> separate compilation of templates?
>
> http://osl.iu.edu/~tveldhui/papers/2000/tmpw00/index.html

I have; I've submitted a paper to the same workshop. Thanks for bringing
the pointer up. The paper underlines the difficulties I was talking
about: separate compilation of C++ templates is possible if you give up
on some typechecking. The paper implements, for example, dynamically
bound templates, thus allowing separate compilation, but gives in
exchange some semantic analysis - some instantiations will always fail
at runtime.

Andrei Alexandrescu (See Website For Email)

unread,
May 1, 2006, 11:26:13 AM5/1/06
to
Walter Bright wrote:
> David Abrahams wrote:
>
>> It definitely matters whether it's before or after instantiation. If
>> you can parse the entire template body at definition time, *and* you
>> can do some semantic analysis, you can avoid those long instantiation
>> backtraces. See ConceptGCC.
>
>
> Seems to me that if an error in the template instantiation happens
> during semantic analysis, the compiler could check to see if it is
> based
> on the parameters or not, and if not, it doesn't need to put out
> backtraces.
>
> And, of course, you could just ignore the backtrace part of the
> message
> if it doesn't help.
>
> I don't see why semantic analysis *has* to happen at definition
> time to
> get better error messages.

It's not only about better error messages. It's about everything that
separate compilation is about: separate compilation. You compile a
module, and once it passed parsing and all semantic analysis, you have a
high level of confidence that that module fulfills certain expectations,
and that it has certain expectations from the rest of the world.


Andrei

Walter Bright

unread,
May 2, 2006, 6:37:34 AM5/2/06
to
Andrei Alexandrescu (See Website For Email) wrote:
> Walter Bright wrote:
>> David Abrahams wrote:
>>
>>> It definitely matters whether it's before or after instantiation. If
>>> you can parse the entire template body at definition time, *and* you
>>> can do some semantic analysis, you can avoid those long instantiation
>>> backtraces. See ConceptGCC.
>>
>> Seems to me that if an error in the template instantiation happens
>> during semantic analysis, the compiler could check to see if it is
>> based
>> on the parameters or not, and if not, it doesn't need to put out
>> backtraces.
>>
>> And, of course, you could just ignore the backtrace part of the
>> message
>> if it doesn't help.
>>
>> I don't see why semantic analysis *has* to happen at definition
>> time to
>> get better error messages.
>
> It's not only about better error messages. It's about everything that
> separate compilation is about: separate compilation. You compile a
> module, and once it passed parsing and all semantic analysis, you have a
> high level of confidence that that module fulfills certain expectations,
> and that it has certain expectations from the rest of the world.

There's just no way for templates to pass all semantic analysis without
knowing what its arguments are. If you could complete the semantic
analysis, you'd have a function, not a template.

What I see as the big advantage of separate compilation is the coding
hygene issue - the template definition is only affected by its arguments
and the symbols in its definition context, rather than other symbols in
the instantiation context. This enables things like better
modularization, implementation hiding, better error messages, and yes,
faster compilation!

-Walter Bright


www.digitalmars.com C, C++, D programming language compilers


David Abrahams

unread,
May 2, 2006, 3:38:34 PM5/2/06
to
Walter Bright <wal...@digitalmars-nospamm.com> writes:

> There's just no way for templates to pass all semantic analysis
> without
> knowing what its arguments are.

Of course there is. Have you looked at ConceptGCC?
http://www.osl.iu.edu/~dgregor/ConceptGCC/

> If you could complete the semantic analysis, you'd have a function,
> not a template.

Why do you say that?

> What I see as the big advantage of separate compilation is the
> coding hygene issue - the template definition is only affected by
> its arguments and the symbols in its definition context, rather than
> other symbols in the instantiation context. This enables things like
> better modularization, implementation hiding, better error messages,
> and yes, faster compilation!

It doesn't help all that much with modularization and error messages
if semantic correctness depends on the instantiation context. See
export. It helps maybe a little in those areas, but to really get
there you need concept support.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Andrei Alexandrescu (See Website For Email)

unread,
May 2, 2006, 3:56:54 PM5/2/06
to
Walter Bright wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>>It's not only about better error messages. It's about everything that
>>separate compilation is about: separate compilation. You compile a
>>module, and once it passed parsing and all semantic analysis, you have a
>>high level of confidence that that module fulfills certain expectations,
>>and that it has certain expectations from the rest of the world.
>
>
> There's just no way for templates to pass all semantic analysis without
> knowing what its arguments are. If you could complete the semantic

> analysis, you'd have a function, not a template.

I believe otherwise. A template can pass all semantic analysis once it
establishes firmly what restrictions it imposes on all types it could be
instantiated with. A template saved as an agnostic AST does not have
that property.

> What I see as the big advantage of separate compilation is the coding
> hygene issue - the template definition is only affected by its arguments
> and the symbols in its definition context, rather than other symbols in
> the instantiation context. This enables things like better
> modularization, implementation hiding, better error messages, and yes,
> faster compilation!

Hygiene is actually but a prerequisite of proper semantic analysis :o).
We should not stop there.

Walter Bright

unread,
May 3, 2006, 5:28:45 AM5/3/06
to
David Abrahams wrote:
> Walter Bright <wal...@digitalmars-nospamm.com> writes:
>
>> There's just no way for templates to pass all semantic analysis
>> without
>> knowing what its arguments are.
>
> Of course there is. Have you looked at ConceptGCC?
> http://www.osl.iu.edu/~dgregor/ConceptGCC/

No, thanks for the pointer.

>> If you could complete the semantic analysis, you'd have a function,
>> not a template.
> Why do you say that?

Because with complete semantic analysis, you're ready to generate code.
If you can't generate code yet, semantic analysis is incomplete. I
suspect, though, you're thinking of a different definition of semantic
analysis, i.e. something along the lines of "no further error messages
will happen."

>> What I see as the big advantage of separate compilation is the
>> coding hygene issue - the template definition is only affected by
>> its arguments and the symbols in its definition context, rather than
>> other symbols in the instantiation context. This enables things like
>> better modularization, implementation hiding, better error messages,
>> and yes, faster compilation!
>
> It doesn't help all that much with modularization and error messages
> if semantic correctness depends on the instantiation context. See
> export. It helps maybe a little in those areas, but to really get
> there you need concept support.

I don't agree. For example, modularization depends on coding hygiene,
which has nothing to do with concepts.

David Abrahams

unread,
May 4, 2006, 5:57:46 AM5/4/06
to
Walter Bright <wal...@digitalmars-nospamm.com> writes:

>> It doesn't help all that much with modularization and error messages
>> if semantic correctness depends on the instantiation context. See
>> export. It helps maybe a little in those areas, but to really get
>> there you need concept support.
>
> I don't agree. For example, modularization depends on coding hygiene,
> which has nothing to do with concepts.

Are you sure? Maybe I don't understand what you mean by "hygiene."

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Andrei Alexandrescu (See Website For Email)

unread,
May 4, 2006, 6:01:06 PM5/4/06
to
David Abrahams wrote:
> Walter Bright <wal...@digitalmars-nospamm.com> writes:
>
>
>>>It doesn't help all that much with modularization and error messages
>>>if semantic correctness depends on the instantiation context. See
>>>export. It helps maybe a little in those areas, but to really get
>>>there you need concept support.
>>
>>I don't agree. For example, modularization depends on coding hygiene,
>>which has nothing to do with concepts.
>
>
> Are you sure? Maybe I don't understand what you mean by "hygiene."

"Hygiene" is a term with a rather precise meaning. We can refer to
http://en.wikipedia.org/wiki/Hygienic_macro throughout this discussion.

Andrei

Walter Bright

unread,
May 5, 2006, 4:43:16 AM5/5/06
to
David Abrahams wrote:
> Walter Bright <wal...@digitalmars-nospamm.com> writes:
>
>>> It doesn't help all that much with modularization and error messages
>>> if semantic correctness depends on the instantiation context. See
>>> export. It helps maybe a little in those areas, but to really get
>>> there you need concept support.
>> I don't agree. For example, modularization depends on coding hygiene,
>> which has nothing to do with concepts.
>
> Are you sure? Maybe I don't understand what you mean by "hygiene."

What I mean by it is having non-dependent names not unexpectedly being
looked up in the instantiation context. The meaning of non-dependent
names should also not be affected by the order of #include's, or the
addition of other #include's with arbitrary declarations in them.

David Abrahams

unread,
May 5, 2006, 4:56:13 AM5/5/06
to
"Andrei Alexandrescu (See Website For Email)"
<SeeWebsit...@erdani.org> writes:

> "Hygiene" is a term with a rather precise meaning.

I know, ...

> We can refer to
> http://en.wikipedia.org/wiki/Hygienic_macro throughout this discussion.

....although that page doesn't do much to illuminate its meaning in the
context of C++ templates, since templates aren't macros.

Anyway, IIUC, instantiation context is still an issue with exported
templates, and can affect overload resolution. That's about as close
a mapping as I can devise for the idea of "unhygienic macro" in the
context of templates.

On the other hand, concept-enabled templates can be immune to such
lookup issues, because all operations those templates can use are
looked up through the concept itself. It's almost exactly like the
concept is an abstract base class for which the model declarations
provide concrete implementations... except, of course, there's no type
erasure, so you can still do algorithm specialization.
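A sketch of that analogy, again using later-C++ (C++20) concepts purely
for illustration:

#include <concepts>
#include <cstdio>

template <class T>
concept Drawable = requires(const T& t) { t.draw(); };  // the "abstract interface"

struct Circle                                           // a "model" of the concept
{
    void draw() const { std::puts("circle"); }
};

template <Drawable T>
void render(const T& t)
{
    t.draw();   // only operations granted by the concept are looked up
}

int main()
{
    render(Circle());   // unlike a virtual call, T is known statically: no
}                       // type erasure, so render can still be specialized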

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Gabriel Dos Reis

unread,
May 18, 2006, 5:22:24 PM5/18/06
to
vande...@gmail.com writes:

[...]

| However, I did not see an easy way to gracefully recover from errors
| (i.e., without generating completely meaningless internal structures
| that would eventually lead to a crash) without doing the ODR checks.
| It might be possible to avoid that with some alternative C++ front end
| designs, but my guess is that such designs will likely pay a
| significant
| performance cost. I might be wrong (e.g. blinded by the traditional
| intermediate language approaches).

I highly suspect that the ODR checks you implemented are the right thing
to do, not just for the sake of your compiler's internal data structures,
but for the sake of the program being translated. I highly value that,
and I believe it is the minimum standard other compilers considering
export (e.g. GCC) should meet. Thanks for having set the mark.

C does not formally have the notion of the ODR, but I've heard from someone
(who has worked on C++ since its conception and has contributed to C)
that its inventors did intend it. They just did not have time to get
to it -- and in the meantime relied on external tools like lint.

--
Gabriel Dos Reis
g...@integrable-solutions.net
