n1496

llewelly
May 17, 2004, 5:09:47 PM

I'm unhappy with the bit in n1496 about syntax. Specifically:

# There are three possible approaches to specifying which names in
# a translation unit refer to entities with shared linkage. First,
# names with external linkage can be non-shared by default, and the
# programmer would have to explicitly identify names that are to
# have shared linkage. This is the model that Windows programmers
# are familiar with. Second, names with external linkage can be
# shared by default, and the programmer would have to explicitly
# identify names that are not to have shared linkage. This is
# similar to the model that UNIX programmers are familiar
# with. Third, it can be implementation-defined which of the two
# preceding models applies. This minimizes the required changes to
# existing code.

I'm unhappy with the first model because it seems to break all
existing unix code that makes use of shared libraries.

I'm unhappy with the second model because it changes the default on
Windows; it seems to cause every declaration to undergo a silent
change in meaning.

I don't think either of the first two models should be accepted.

I think I want all of:

(a) A syntax for declaring a name to have shared linkage.

(b) A syntax for declaring a name to have non-shared linkage.

(c) If a name's declaration does not explicitly specify shared
or non-shared linkage, whether that name's linkage is shared
or non-shared shall be implementation-defined.

The first time I read the third alternative, it seemed to mean
this. On re-reading it, I'm unsure. Specifically, if an
implementation supports shared libraries, I want both (a) and
(b), regardless of the implementation-defined meaning of (c).
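For concreteness, here is roughly what (a) and (b) already look like in today's vendor dialects. This is a hedged sketch using the MSVC and GCC spellings, not a proposal for standard syntax; the macro names SHARED_API and LOCAL_API are invented for illustration.

```cpp
#include <cassert>

// Hedged sketch: the vendor spellings that (a) and (b) map onto today.
// SHARED_API / LOCAL_API are invented names, not proposed standard syntax.
#if defined(_WIN32)
  #define SHARED_API __declspec(dllexport)                   // (a) shared linkage
  #define LOCAL_API                                          // (b) the Win32 default
#elif defined(__GNUC__)
  #define SHARED_API __attribute__((visibility("default")))  // (a) shared linkage
  #define LOCAL_API  __attribute__((visibility("hidden")))   // (b) non-shared linkage
#else
  #define SHARED_API  // (c): with no annotation, linkage is implementation-defined
  #define LOCAL_API
#endif

LOCAL_API  int private_helper() { return 41; }  // confined to this linkage unit
SHARED_API int shared_counter()                 // crosses the library boundary
    { return private_helper() + 1; }
```

Note that the unannotated default differs per platform, which is exactly why both (a) and (b) need explicit spellings regardless of what (c) says.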


---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std...@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]

Pete Becker
May 17, 2004, 9:14:25 PM

llewelly wrote:
>
> I'm unhappy with the bit in n1496 about syntax. Specifically:
>
> # There are three possible approaches to specifying which names in
> # a translation unit refer to entities with shared linkage. First,
> # names with external linkage can be non-shared by default, and the
> # programmer would have to explicitly identify names that are to
> # have shared linkage. This is the model that Windows programmers
> # are familiar with. Second, names with external linkage can be
> # shared by default, and the programmer would have to explicitly
> # identify names that are not to have shared linkage. This is
> # similar to the model that UNIX programmers are familiar
> # with. Third, it can be implementation-defined which of the two
> # preceding models applies. This minimizes the required changes to
> # existing code.
>
> I'm unhappy with the first model because it seems to break all
> existing unix code that makes use of shared libraries.

It doesn't break anything. It means that such code wouldn't be portable.
It's not portable today, so that's no change.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

llewelly
May 19, 2004, 6:10:09 AM

peteb...@acm.org (Pete Becker) writes:

> llewelly wrote:
>>
>> I'm unhappy with the bit in n1496 about syntax. Specifically:
>>
>> # There are three possible approaches to specifying which names in
>> # a translation unit refer to entities with shared linkage. First,
>> # names with external linkage can be non-shared by default, and the
>> # programmer would have to explicitly identify names that are to
>> # have shared linkage. This is the model that Windows programmers
>> # are familiar with. Second, names with external linkage can be
>> # shared by default, and the programmer would have to explicitly
>> # identify names that are not to have shared linkage. This is
>> # similar to the model that UNIX programmers are familiar
>> # with. Third, it can be implementation-defined which of the two
>> # preceding models applies. This minimizes the required changes to
>> # existing code.
>>
>> I'm unhappy with the first model because it seems to break all
>> existing unix code that makes use of shared libraries.
>
> It doesn't break anything. It means that such code wouldn't be portable.
> It's not portable today, so that's no change.

I am sorry, but these two statements make no sense to me. Could you
please rephrase?

Pete Becker
May 19, 2004, 10:51:09 PM

llewelly wrote:
>
> peteb...@acm.org (Pete Becker) writes:
>
> > llewelly wrote:
> >>
> >> I'm unhappy with the bit in n1496 about syntax. Specifically:
> >>
> >> # There are three possible approaches to specifying which names in
> >> # a translation unit refer to entities with shared linkage. First,
> >> # names with external linkage can be non-shared by default, and the
> >> # programmer would have to explicitly identify names that are to
> >> # have shared linkage. This is the model that Windows programmers
> >> # are familiar with. Second, names with external linkage can be
> >> # shared by default, and the programmer would have to explicitly
> >> # identify names that are not to have shared linkage. This is
> >> # similar to the model that UNIX programmers are familiar
> >> # with. Third, it can be implementation-defined which of the two
> >> # preceding models applies. This minimizes the required changes to
> >> # existing code.
> >>
> >> I'm unhappy with the first model because it seems to break all
> >> existing unix code that makes use of shared libraries.
> >
> > It doesn't break anything. It means that such code wouldn't be portable.
> > It's not portable today, so that's no change.
>
> I am sorry, but these two statements make no sense to me. Could you
> please rephrase?
>

Sure. Today, Unix code that makes use of shared libraries isn't marked
up with import/export specifiers. If the standard adds some way of
saying that various functions will live in a shared library, then code
that's marked up in that way will (eventually) be portable, say between
Unix and Windows. Unix code that isn't marked up won't take advantage of
this added syntax, so it won't be portable to Windows. However, Unix
compilers will certainly still support the current style (through a
command line option), and the code will still work in all the places
where it works now.

For a less abstract example, quite a few programmers still use classic
iostreams through the header <iostream.h>, even though the standard
provides something similar but different through <iostream>.
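The markup Pete alludes to usually ends up as a single macro in practice. The following is a common real-world pattern, not anything from n1496; the names MYLIB_API and BUILDING_MYLIB are illustrative.

```cpp
#include <cassert>

// Common real-world pattern (illustrative names): one macro expands to
// dllexport while building the library, dllimport while consuming it,
// and to nothing on Unix, where external names are shared by default.
#if defined(_WIN32)
  #ifdef BUILDING_MYLIB   // defined only when compiling the library itself
    #define MYLIB_API __declspec(dllexport)
  #else
    #define MYLIB_API __declspec(dllimport)
  #endif
#else
  #define MYLIB_API
#endif

MYLIB_API int mylib_version();   // the same declaration works everywhere

// In the library's own source file (built with BUILDING_MYLIB defined):
int mylib_version() { return 3; }
```

Code marked up this way ports between Unix and Windows; unmarked Unix code keeps working where it works now, which is Pete's point.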

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---

Niall Douglas
May 19, 2004, 10:52:54 PM

On Mon, 17 May 2004 21:09:47 +0000 (UTC), llewelly
<llewe...@xmission.dot.com> wrote:

> # There are three possible approaches to specifying which names in
> # a translation unit refer to entities with shared linkage. First,
> # names with external linkage can be non-shared by default, and the
> # programmer would have to explicitly identify names that are to
> # have shared linkage. This is the model that Windows programmers
> # are familiar with. Second, names with external linkage can be
> # shared by default, and the programmer would have to explicitly
> # identify names that are not to have shared linkage. This is
> # similar to the model that UNIX programmers are familiar
> # with. Third, it can be implementation-defined which of the two
> # preceding models applies. This minimizes the required changes to
> # existing code.
>
> I'm unhappy with the first model because it seems to break all
> existing unix code that makes use of shared libraries.

One could take the view that Unix has it wrong and it needs to change for
its own good. Irrespective of what the standard says, the implementors on
Unix will devise some transitory/interoperability strategy. It would also
hardly be the first time that existing implementations did something which
was later ruled out by the standard (I'm thinking of unqualified name
lookup within templates particularly).

BTW, regarding N1496 appendix item 4: I can confirm that the Unix
mechanism cannot be implemented on Windows. The PE binary hardcodes, by
name, which symbols it looks for in each library, since under PE you can
use ordinal rather than mangled-symbol resolution. The advantage of the
PE system is a substantially reduced search tree, and thus improved load
and link times, but it does mean you can't arbitrarily muck around with
DLL contents without a relink (no hardship under ELF, as ELF dynamic
loaders ARE the full-strength standard linker).

> I'm unhappy with the second model because it changes the default on
> Windows; it seems to cause every declaration to undergo a silent
> change in meaning.

N1496 also doesn't cover the need to use dllimport with PE for variables.
You can get away with not specifying dllimport for functions on Win32 but
you *must* specify it for variables.
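A hedged illustration of the point about variables (the macro name MYDLL_DATA is invented): a call to a DLL function can be fixed up through a linker-generated thunk, but a variable access is a raw address, so the compiler has to be told up front to go through the import table.

```cpp
#include <cassert>

// Illustrative sketch. On Win32 a function can be called without
// __declspec(dllimport) because the linker can synthesise a call thunk,
// but an imported variable must be marked so the compiler emits an
// indirect access through the import address table.
#if defined(_WIN32) && !defined(BUILDING_MYDLL)
  #define MYDLL_DATA __declspec(dllimport)
#else
  #define MYDLL_DATA
#endif

MYDLL_DATA extern int shared_setting;  // *must* carry dllimport on Win32 clients
int shared_setting = 7;                // the definition lives inside the DLL

int read_setting() { return shared_setting; }
```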

> I don't think either of the first two models should be accepted.

I think the first model should be mandated. Unix shared libraries emulate
static linking, except that the linking is ongoing: it can be extended or
undone as libraries are manually loaded and unloaded. Certainly from a
programmer's point of view, you are supposed to do nothing special to use
shared libraries - just tweak your build system slightly, with no source
changes. However, the reality is more complex than that.

Because Unix shared libraries seek to preserve this legacy appearance,
they diverge much less from both the letter and the spirit of static
linking. One could therefore view that system as "enhanced static
linking", and thus not something which needs to be borne in mind when
updating the ISO C++ standard.

In other words, what I'm saying is that a MSVC style shared library system
can coexist with the Unix shared library system whereas the opposite is
most certainly not true. Therefore be brave and go with system number 1.

Cheers,
Niall

Pete Becker
May 20, 2004, 2:44:54 PM

Niall Douglas wrote:
>
> You can get away with not specifying dllimport for functions on Win32 but
> you *must* specify it for variables.

No. The compiler has to know to generate an indirect reference. That
doesn't depend on specifying dllimport, but simply on knowing that the
variable is imported. The marker that indicates that it's exported is
sufficient: the compiler can generate indirect references whenever it
accesses exported data (although this would mean that accesses from the
same linkage unit would be indirect as well).

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

---

Davide Bolcioni
May 22, 2004, 8:18:04 PM

Niall Douglas wrote:

>> I'm unhappy with the first model because it seems to break all
>> existing unix code that makes use of shared libraries.

> One could take the view that Unix has it wrong and it needs to change
> for its own good. Irrespective of what the standard says, the
> implementors on Unix will devise some transitory/interoperability
> strategy.

From the point of view of a programmer doing mostly maintenance work on
existing code bases, I would say that Unix had it right from the
beginning, and Windows requires the programmer to pepper the code with
too much detail for his own good: the notion that shared libraries are
not something which should concern the language, only the linker, made
my job vastly easier. If you want metaphors, think relational versus
handcrafted database, with the linker doing the joins.

> BTW in N1496 appendix item 4 I can confirm that the Unix mechanism
> cannot be implemented on Windows.

Given the time scale expected of C++0x, maybe the right question is
whether it is implementable for .NET?

> The PE binary hardcodes what symbols
> it looks for from each library by name as of course under PE you can use
> ordinal rather than mangled symbol resolution. The advantage of the PE
> system is a substantially reduced search tree and thus improved load &
> link times but it does mean you can't arbitrarily muck around with DLL
> contents without a relink (not hard as ELF dynamic loaders ARE the
> full-strength standard linker).

This should be of interest only to people implementing a linker, in my
opinion.

>> I'm unhappy with the second model because it changes the default on
>> Windows; it seems to cause every declaration to undergo a silent
>> change in meaning.

From the point of view of maintenance ... I would personally dislike that
intensely :-).

> N1496 also doesn't cover the need to use dllimport with PE for
> variables. You can get away with not specifying dllimport for functions
> on Win32 but you *must* specify it for variables.

Which is an implementation detail which I expect the linker to handle,
although the PE linker doesn't.

> I think the first model should be mandated. Unix shared libraries
> emulate static linking except that the linking is continuous in that it
> can be more-done or be undone according to manual loading and unloading
> of libraries.

Unix shared libraries implement visibility as defined in the C or C++
language, without introducing another layer of detail for the programmer
to handle (and possibly get wrong). Programmers wishing for the
additional complexity might have the option to specify it, but the
default should be implementation-defined (in the makefile) linking.

In other words: we don't tell the programmer how the program will be
cobbled together, he had better make sure it works anyway. Guidelines as
to which tricks not to pull in order to not have things break
unexpectedly would be really welcome and in my humble opinion belong in
a language standard: FORTRAN got much mileage from its rules for common
variable access, for example.

> In other words, what I'm saying is that a MSVC style shared library
> system can coexist with the Unix shared library system whereas the
> opposite is most certainly not true. Therefore be brave and go with
> system number 1.

On the contrary, I suggest leaving the language as it stands; just
acknowledge that shared libraries are a reality of current
implementations and might impose a series of restrictions which should
be observed by the programmer.

Best Regards,
Davide Bolcioni

Niall Douglas
May 24, 2004, 4:29:39 PM

On Sun, 23 May 2004 00:18:04 +0000 (UTC), Davide Bolcioni
<6805b...@sneakemail.com> wrote:

>> One could take the view that Unix has it wrong and it needs to change
>> for its own good. Irrespective of what the standard says, the
>> implementors on Unix will devise some transitory/interoperability
>> strategy.
>
> From the point of view of a programmer doing mostly maintenance work of
> existing code bases, I would say that Unix had it right from the
> beginning and Windows requires the programmer to pepper the code with
> too much detail for his own good: the notion that shared libraries are
> not something which should concern the language, only the linker, made
> my job exceedingly easier.

Sure, the Unix method makes it easier for you to produce something which
functions. However, as anyone who has ever compared real-world application
performance between Windows and Unix realises, Unix GUI applications just
go slower - and I was even comparing two applications both using Qt.

If you go the Unix route, code *quality* suffers, because it's much easier
for the programmer to be lazy. If you go the Windows route, you get close
to optimal quality in exchange for slightly more work. You already know
which I prefer, but I'll elaborate below.

>> BTW in N1496 appendix item 4 I can confirm that the Unix mechanism
>> cannot be implemented on Windows.
>
> Given the time scale expected of C++0x, maybe the right question is
> whether it is implementable for .NET ?

I personally feel that when producing language standards as with any
standard, the format should be big release, then point 1 release and point
2 release with point 2 released as the next big release is in preparation.
ISO C++ has not followed this, much to its detriment :(

Now we have most major compilers nearly complying with the standard (I
view "export" as not particularly useful/important if I understand the
standard correctly), we really could do with a point release of the
standard to address urgent issues in a timely fashion. My top five most
annoying features of C++ are:

1. Lack of move constructors. It really, *really* annoys me to look at the
generated assembler for, say, a string class and see loads of pointless
copying. With move constructors the compiler can optimise the case where
the object being copied is obviously about to be destroyed. Also, around
5% of the classes I design operate with move semantics, which is a royal
PITA when your copy constructor takes const references (either you
const_cast or use mutable).
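The facility asked for in item 1 was eventually standardised as rvalue references in C++11. A minimal sketch of the idea, using that later syntax; Buffer is a hypothetical class, not from any library discussed here.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <utility>

// Sketch of the move-constructor idea using the syntax later adopted in
// C++11; Buffer is a hypothetical illustration class.
class Buffer {
    char*       data_;
    std::size_t size_;
public:
    explicit Buffer(const char* s)
        : data_(new char[std::strlen(s) + 1]), size_(std::strlen(s)) {
        std::memcpy(data_, s, size_ + 1);
    }
    Buffer(const Buffer& other)          // deep copy: the cost lamented above
        : data_(new char[other.size_ + 1]), size_(other.size_) {
        std::memcpy(data_, other.data_, size_ + 1);
    }
    Buffer(Buffer&& other) noexcept      // move: steal the buffer, no copying
        : data_(other.data_), size_(other.size_) {
        other.data_ = nullptr;
        other.size_ = 0;
    }
    ~Buffer() { delete[] data_; }
    std::size_t size() const { return size_; }
};
```

The move constructor covers exactly the case described: the source object is about to be destroyed anyway, so its storage can be pilfered instead of duplicated.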

2. Lack of local functions. I use a lot of "undoable operations" to roll
back partially completed work in case of an exception. I got the idea
originally from Alexandrescu, but I've improved on his original work, and
you can find the result at
http://tnfox.sourceforge.net/TnFOX/html/group__rollbacks.html. Sometimes
the rollback can be quite complex, and here you need local functions -
unfortunately these are not permitted, so instead you declare a local
class with a static member function, which clutters up the source :(. If
local classes with static member functions are possible, so are local
functions! I'd particularly love it if local functions had access to the
scope in which they are defined.
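The workaround described above - a local class whose static member function stands in for the forbidden local function - looks like this; apply_twice is an invented example.

```cpp
#include <cassert>

// The legal workaround for C++'s lack of local functions: a local class
// with a static member function. Note the limitation complained about
// above: Local::step() has no access to apply_twice()'s local variables.
int apply_twice(int x) {
    struct Local {
        static int step(int v) { return v + 1; }  // the would-be local function
    };
    return Local::step(Local::step(x));
}
```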

3. Removal of stupid and arbitrary limits. There are lots in here, but
thankfully most are obvious from reading through the standard. Off the top
of my head: why can't you define operator new & delete within a namespace
(why aren't they just treated like normal functions with an irregular call
syntax)? Why can't you template a typedef? Why doesn't the standard
mandate marking non-throwing functions in the mangled symbol, so the
compiler can warn at compile time when a function shouldn't be
non-throwing, or indeed when it could be (it also helps the optimiser)?
Why can't you initialise aggregate types with compile-time known functions
(I'm thinking compile-time variable arrays here)? Why can't you mark
virtual functions as being "once" in all subclasses (i.e. if there are
virtual functions A::foo() and B::foo() and class C inherits A and B in
that order, A's foo() points at B's foo() rather than there being two
(ambiguous) foo()s)? Such a feature would greatly ease mixing run-time and
compile-time polymorphism without introducing the costs of virtual base
class inheritance. Why can't templates have parameterised pieces of code,
so we could finally get rid of macros? And why is template argument
deduction through a function call the only way to capture an unknown type,
which severely limits your metaprogramming flexibility (and forces stilted
design)? I could go on, but the "D" programming language has many more
good ideas here.

4. Lack of basic tools in the STL (e.g. a hash-based dictionary container,
reference-counting abstract base classes (preferably based on
policy-driven smart pointers), LRU caching, etc.). This stuff is hardly
new technology - I threw together a generic hash-based dictionary
container in a few hours using map, vector and list. It's really not hard.
And no, I don't like hash_map from the SGI library, as it's not
configurable enough.

5. Speaking of which, the STL in general annoys me. I deliberately don't
use large sections of it, and I'm hardly alone here - a large minority of
C++ programmers refuse to use the STL at all. The thing which annoys me
the most is the naming scheme - it consistently gives functions the least
likely name you'd think of, and I personally can't see why lots more
synonyms couldn't be added (e.g. list<>::splice() could also be called
list<>::relink() or, craziest of all, list<>::move()). It's this style
issue, and its being totally contrary to how I write my own C++, which
makes me always uncomfortable using it, and which is why I wrapped it with
a thin veneer to provide Qt API emulations (see
http://tnfox.sourceforge.net/TnFOX/html/group___q_t_l.html).

The language itself is generally good, though it has an unfortunate habit
of requiring more keyboard typing to produce better C++, which means
laziness automatically causes bad C++. However, its weakest point IMHO is
the STL; if I had absolute control over the ISO standardisation process,
I'd personally start most of it again from scratch and supply thin veneers
emulating the existing STL's API. My new STL would be far more
policy-driven, as the existing lack of policies prevents use of
STL-provided functionality (e.g. searching a list when the predicate is
neither fixed nor simple).

(Note that I am aware that some of my top five most annoying things about
C++ either are being addressed or have been addressed in the next
standard. What's wrong is that none of the above five items requires much
work - they are simple - and they could be safely added to the standard
within months rather than years. They would also require trivial
quantities of additional work by compiler vendors. The fact that they are
not being applied with haste means millions more lines of code must be
written to circumvent them, and certainly hundreds of thousands more
programmer hours must be wasted, which I think is very irresponsible.)

>> The PE binary hardcodes what symbols it looks for from each library by
>> name as of course under PE you can use ordinal rather than mangled
>> symbol resolution. The advantage of the PE system is a substantially
>> reduced search tree and thus improved load & link times but it does
>> mean you can't arbitrarily muck around with DLL contents without a
>> relink (not hard as ELF dynamic loaders ARE the full-strength standard
>> linker).
>
> This should be of interest only to people implementing a linker, in my
> opinion.

Though I cannot speak for them, I cannot see Microsoft up-ending the
Windows DLL mechanism just for ISO C++ compliance. DLLs are absolutely
core to Windows because, unlike Unix, it was designed from the beginning
with them in mind. If the ISO C++ spec adopts a mechanism incompatible
with Win32, then that feature will likely be ignored on the vast majority
of the world's computers.

> In other words: we don't tell the programmer how the program will be
> cobbled together, he had better make sure it works anyway. Guidelines as
> to which tricks not to pull in order to not have things break
> unexpectedly would be really welcome and in my humble opinion belong in
> a language standard: FORTRAN got much mileage from its rules for common
> variable access, for example.

The trouble with this approach is that it produces sub-optimal code. If
the compiler cannot know whether a symbol is local to this DSO/DLL or not,
it must assume the worst and run *every* *single* *access* through an
indirection lookup. This is why code compiled with GCC using -fPIC is
markedly slower than code compiled without it, and it's also more bloated.
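A sketch of the indirection being described, using the GCC visibility extension (a compiler-specific spelling, not standard C++): a default-visibility symbol must be reached through the GOT under -fPIC because its definition may be interposed at load time, while a hidden symbol can be accessed directly.

```cpp
#include <cassert>

// GCC/clang-specific sketch; VIS() expands to nothing on other compilers.
#if defined(__GNUC__)
  #define VIS(v) __attribute__((visibility(v)))
#else
  #define VIS(v)
#endif

VIS("default") int shared_count = 0;  // under -fPIC: indirect access via the GOT
VIS("hidden")  int local_count  = 0;  // direct, PC-relative access

int bump_both() {
    ++shared_count;  // possibly interposed, so the compiler must stay indirect
    ++local_count;   // provably local to this DSO, so direct
    return shared_count + local_count;
}
```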

This is partially where the speed & size differential between MSVC & GCC
compiled binaries comes from. The trouble with "default shared" is that it
encourages laziness - most programmers will do the minimum to get a
working application and then stop. If it's "default private" then
programmers must add the extra annotation to even reach a working binary -
which is good, because code quality improves substantially.

I also feel that whether something is visible outside its DSO/DLL is part
of its API spec and thus interface contract. If you disagree with this,
consider how public/protected/private relate to class design and note that
similar logic applies to both.

> On the contrary, I suggest leaving the language as it stands; just
> acknowledge that shared libraries are a reality of current
> implementations and might impose a series of restrictions which should
> be observed by the programmer.

The "stick head in sand" approach is only useful when a technology is
brand new and the consequences of misapplying it may be dangerous. Shared
libraries have been in widespread use for fifteen years - we know them
well enough now that not acting just makes things worse.

Cheers,
Niall

llewelly
May 24, 2004, 11:56:06 PM

Niall Douglas <s_googl...@nedprod.com> writes:

> On Sun, 23 May 2004 00:18:04 +0000 (UTC), Davide Bolcioni
> <6805b...@sneakemail.com> wrote:
>
>>> One could take the view that Unix has it wrong and it needs to
>>> change for its own good. Irrespective of what the standard says,
>>> the implementors on Unix will devise some
>>> transitory/interoperability strategy.
>>
>> From the point of view of a programmer doing mostly maintenance work of
>> existing code bases, I would say that Unix had it right from the
>> beginning and Windows requires the programmer to pepper the code with
>> too much detail for his own good: the notion that shared libraries are
>> not something which should concern the language, only the linker, made
>> my job exceedingly easier.
>
> Sure, the Unix method makes your job easier to produce something which
> functions. However as anyone who has ever compared real-world
> application performance between Windows and Unix realises, Unix GUI
> applications just go slower - and I was even comparing two
> applications both using Qt.

In my experience, this has far more to do with the multiple (and
complicated!) layers of abstraction which come with the X window
system. Note that it is generally true for statically linked
executables as well as dynamically linked ones. Also, the opposite is
generally true for non-GUI applications, due to more aggressive fs
caching. I'm not saying the unnecessarily visible names which often
result from the Unix model (note: not always - they can be made not
visible), and GOT lookups and such, have no effect on performance (they
are a performance problem), but for most apps, I/O overwhelms most
other factors.

> If you go the Unix route, code *quality* suffers because it's much
> easier for the programmer to be lazy. If you go the Windows route, you
> get as perfect quality as you can for free with slightly more
> work. You already know which I prefer but I'll elaborate below.
>
>>> BTW in N1496 appendix item 4 I can confirm that the Unix mechanism
>>> cannot be implemented on Windows.
>>
>> Given the time scale expected of C++0x, maybe the right question is
>> whether it is implementable for .NET ?
>
> I personally feel that when producing language standards as with any
> standard, the format should be big release, then point 1 release and
> point 2 release with point 2 released as the next big release is in
> preparation. ISO C++ has not followed this, much to its detriment :(

What do you think TC1 is, if not the better-labeled equivalent of a
point release?

>
> Now we have most major compilers nearly complying with the standard (I
> view "export" as not particularly useful/important if I understand the
> standard correctly),

It seems unimportant only because so few are using it. :-)

Whether or not it is useful is unknown (to most C++ users),
again because so few are using it.

> we really could do with a point release of the
> standard to address urgent issues in a timely fashion.

In a timely fashion? They meet twice a year. Should we be funding
them to the extent that they can work on it 40 hours/week, 50
weeks a year? That would radically change the amount of time it
took for improvements to be written into the standard, but would it
affect how quickly implementors can move to implement what is
written? OTOH, we could then have the easiest-to-understand
standard around (modulo the sheer complexity of C++).

> My top five
> most annoying features of C++ are:
>
> 1. Lack of move constructors. It really, *really* annoys me to see the
> generated assembler for using say a string class and see loads of
> pointless copying. With move constructors the compiler can optimise
> the case when the object being copied is obviously about to get
> destroyed. Also, around 5% of the classes I design operate with
> move semantics which is a royal PITA when your copy constructor
> takes const references (either you const_cast or use mutable).
>
> 2. Lack of local functions. I use a lot of "undoable operations" to
> roll back partially completed work in case of an exception. I got
> the idea originally from Alexandrescu but I've improved on his
> original work and you can find them at
> http://tnfox.sourceforge.net/TnFOX/html/group__rollbacks.html. Sometimes
> the rollback can be quite complex and here you need local functions

IMO, 1. and 2. are serious changes. Do you think serious changes belong in
point releases? I don't. Note, I'm in favor of move constructors,
and possibly local functions (depending on semantics) as well.

> unfortunately these are not permitted, so instead you declare a local
> class with a static member function but this furls up the source
> :(. If local classes with static member functions are possible, so are
> local functions! I'd particularly love if local functions had access
> to the scope in which they are defined.
>
> 3. Removal of stupid and arbitrary limits. There are lots in here but
> thankfully most are obvious from reading through the standard. Off
> the top of my mind, why can't you define operator new & delete
> within a namespace (why aren't they just treated like normal
> functions with an irregular call syntax)?

A delete expression may choose the deallocation function (that's the
user-definable operator delete) dynamically, based on the runtime
type of the deleted object. So a user-defined operator delete
within a namespace would naturally apply to all *types* declared
in said namespace, *not* to all delete-expressions in said
namespace. A desirable feature, IMO, but it potentially comes
with some difficult design work, and, most importantly, design
work that can't assume a delete-expression is just 'irregular
call syntax' - the semantics for what function is called
are also different, and a delete-expression does more than just
call the deallocation function anyway.

> Why can't you template a
> typedef?

People on the committee are doing good hard work to provide this to
you.

> Why doesn't the standard mandate marking in the mangled
> symbol of non-throwing functions so the compiler can warn at
> compile time when a function shouldn't be non-throwing or indeed
> when it could (it also helps the optimiser here)?

*shrug* C++ exception specifications seem broken beyond all repair,
and in any case, it seems almost all C++ programmers who use
exceptions are focused on using them for non-local error
handling. Why complicate their lives with a feature designed for
local error handling? If C++ is to gain a feature to force
specified errors to be handled locally, it shouldn't have
anything to do with exceptions.

Rather, if the committee is going to spend any time on exception
specifications, they should consider deprecating them. Probably,
this either isn't feasible, or isn't worth the effort, so the
committee's time IMO seems better spent on other things.

If the committee is concerned with making exceptions *safer*, I would
appreciate it if they would ask whether the programmer can readily
provide enough information so that the implementation can diagnose
simple but common violations of exception guarantees.

> Why can't you
> initialise aggregate types with compile-time known functions (I'm
> thinking compile-time variable arrays here)?

What do you mean by this? Surely your compiler supports dynamic
initialization.

> Why can't you mark
> virtual functions as being "once" in all subclasses (ie; if there
> are virtual functions A::foo() and B::foo and class C inherits A
> and B in that order, A's foo() points at B's foo() rather than
> there being two (ambiguous) foo()'s) - such a feature would greatly
> ease mixing run-time & compile-time polymorphism without
> introducing the costs of virtual base class inheritance? Why can't
> templates have parameterised pieces of code so we can finally get
> rid of macros? And why is the only way to determine an unknown type
> is via template argument deduction via a function which severely
> castrates your metaprogramming flexibility

Because C++ wasn't designed with metaprogramming in mind. It was an
accident. Hopefully it will inspire a set of dedicated
metaprogramming features, either in C++ or some other
programming language, but for now, metaprogramming in C++ is like
OOP in assembler.
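For readers who haven't seen the "accident" in action, the canonical minimal example — a value computed entirely at compile time by (mis)using template instantiation as a recursion mechanism:

```cpp
// The "accidental" compile-time machinery: template instantiation
// recurses, so values are computed with no run-time cost at all.
template<unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template<>                       // explicit specialisation ends the recursion
struct Factorial<0> {
    static const unsigned long value = 1;
};
```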

> (and forces stilted
> design)? I can go on, but the "D" programming language has many
> more good ideas here.

Some good ideas, and some bad ones. The only big advantage seems to
be that D (like darn near every other programming language(*)) is
much, much easier to implement tools for.

(*) I've heard Algol 68, ANSI Common Lisp, and PL/1 were harder to
implement than C++. I don't know what the methods behind those
judgements were, however.



>
> 4. Lack of basic tools in the STL (eg; a hash based dictionary
> container, reference counting abstract base classes (preferably
> based on policy-driven smart pointers), LRU caching etc. etc). This
> stuff is hardly new technology - I threw together a generic hash
> based dictionary container in a few hours using map, vector and
> list. It's really not hard. And no, I don't like hash_map from the
> SGI library as it's not configurable enough.

Someday, you should get P.J.'s book _The Draft Standard C++ Library_,
so you can see what the STL replaced. The library described
therein (which AFAIK was the only other serious contender (*)) is
well-designed in its own way, but it was sparser than
the library that we now have, largely due to the STL's richer
set of containers and algorithms. (The compatibility lost due to
the massive changes to the iostream library is IMO not the fault
of the STL.)

Somewhat separately: it's easy to look at e.g. Java and think 'my, the
C++ standard library is a desert', but it's important to remember

(*) 'contender' in the sense that if the STL had not been voted into
the standard, we would likely have what Plauger described in that
book, though that is in most ways better described as a
predecessor.

>
> 5. Speaking of which, the STL in general annoys me.

Maybe you are mis-using it? In truth, all interfaces annoy, and it is
human nature to best recall those aspects of an interface which
annoy.

> I deliberately
> don't use large sections of it and I'm hardly alone here - a large
> minority of C++ programmers refuse to use the STL at all.

IMO, most of them are costing themselves time, costing their
co-workers time, and if you work with their code, costing you time.

> The thing
> which annoys me the most is the naming scheme - it consistently
> names the functions the least likely thing you'd think of and I
> personally can't see why lots more synonyms couldn't be added (eg;
> list<>::splice() could also be called list<>::relink() or craziest
> of all, list<>::move()).

I find splice() more natural than either relink() or move(), both of
which seem subtly wrong to me. I have to laugh at this particular
complaint of yours, because it seems to me there are many examples
(std::remove() anyone?) where almost everyone agrees the STL
picked the wrong name, but you chose a place where it picked the
right name.
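For illustration, the usual erase-remove idiom — the reason std::remove's name trips people up is that it removes nothing by itself (erase_all is a made-up helper name):

```cpp
#include <algorithm>
#include <vector>

// std::remove only shifts the kept values to the front and returns the
// new logical end; the container's own erase() must do the actual
// shrinking. Hence the classic "erase-remove" pairing below.
void erase_all(std::vector<int>& v, int value) {
    v.erase(std::remove(v.begin(), v.end(), value), v.end());
}
```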

> It's this style issue and it being totally
> contrary to how I write my own C++ which makes me always
> uncomfortable using it, which is why I wrapped it with a thin
> veneer to provide Qt API emulations (see
> http://tnfox.sourceforge.net/TnFOX/html/group___q_t_l.html).

*Not* my idea of an improvement over the STL. If you'd like to see a
serious alternative to the STL, you might try the NTL. (Search for
it on sourceforge.)

However, it should be acknowledged that the STL strongly favors value
types, which is a huge advantage for many kinds of programs, but
is largely antithetical to certain common OO styles which
overwhelmingly demand entity types.

It would be quite useful to have a complementary library of
containers and algorithms strongly biased toward entity types.



>
> The language itself is generally good, though it has an unfortunate
> habit of requiring more keyboard typing to create better C++ which
> means laziness automatically causes bad C++. However its weakest point
> IMHO is the STL which if I had absolute control over the ISO
> standardisation process I'd personally start most of it again from
> scratch and supply thin veneers emulating the existing STL's API.

[snip]

What would this achieve? The standard only defines an interface.

I realize that those of you who never leave the bounds of Purified
Natural Filtered Sanctified OO find the STL's favoritism towards
value types bizarre, confusing, and inappropriate, but those of
us who don't have entity types falling out of our ears generally
appreciate it.

Again, it would be nice to see it improved in various ways, and nice
to see it complemented by something that better supported entity
types, but it shouldn't be removed or replaced.

Thorsten Ottosen

May 25, 2004, 12:02:10 PM5/25/04
to

From: "llewelly" <llewe...@xmission.dot.com>

| Rather, if the committee is going to spend any time on exception
| specifications, they should consider deprecating them.

Changing the semantics from runtime-checks to no runtime-check would be good IMO.

| It would be quite useful to have a complementary library of
| containers and algorithms strongly biased toward entity types.

FYI I'm working on a set of adapters that make it possible to store
pointers in std containers without using shared_ptr. Whether this is a good idea
or not remains to be seen.

What kind of algorithms are you talking about especially for entity types?

br

Thorsten

tom_usenet

May 25, 2004, 4:48:58 PM5/25/04
to
On Mon, 24 May 2004 14:29:39 CST, Niall Douglas
<s_googl...@nedprod.com> wrote:

>I personally feel that when producing language standards as with any
>standard, the format should be big release, then point 1 release and point
>2 release with point 2 released as the next big release is in preparation.
>ISO C++ has not followed this, much to its detriment :(

You appear to have missed the 2003 TC (or "point release"). There's
also the library TR coming out fairly soon. So you have a "bug fix"
point release and a new features point release. What more do you want?

>Now we have most major compilers nearly complying with the standard (I
>view "export" as not particularly useful/important if I understand the
>standard correctly),

I'd like to see what EDG do with their implementation before
commenting on this. I'm hoping for template "object code", which I
think may be possible.

>we really could do with a point release of the
>standard to address urgent issues in a timely fashion. My top five most
>annoying features of C++ are:
>
>1. Lack of move constructors. It really, *really* annoys me to see the
>generated assembler for using say a string class and see loads of
>pointless copying. With move constructors the compiler can optimise the
>case when the object being copied is obviously about to get destroyed.
>Also, around 5% of the classes I design operate with move semantics which
>is a royal PITA when your copy constructor takes const references (either
>you const_cast or use mutable).

For now you have many options:

- Move objects with memcpy. This works fine when the object isn't
self-referencing (directly or indirectly), but is non-standard for
non-PODs of course. For the vast majority of "value" types (including
std::containers on implementations I know), memcpy works fine.

- Use mojo, or something similar.

- Just use the suboptimal copy/destruct.

- Implement move constructors on your favourite open source compiler.
:)

- Use default construction and std::swap.
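As a sketch of that last option, assuming a container (such as std::vector) whose swap exchanges internal pointers in O(1); move_into is a made-up helper name:

```cpp
#include <vector>

// "Default construction + swap" as a poor man's move (pre-C++11):
// no element is copied, sink takes over source's storage, and source
// is left empty.
void move_into(std::vector<int>& sink, std::vector<int>& source) {
    std::vector<int> empty;
    sink.swap(source);   // sink now owns source's old storage
    source.swap(empty);  // source left empty; sink's old storage is
}                        // freed here when 'empty' goes out of scope
```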

>2. Lack of local functions. I use a lot of "undoable operations" to roll
>back partially completed work in case of an exception. I got the idea
>originally from Alexandrescu but I've improved on his original work and
>you can find them at
>http://tnfox.sourceforge.net/TnFOX/html/group__rollbacks.html. Sometimes
>the rollback can be quite complex and here you need local functions -
>unfortunately these are not permitted, so instead you declare a local
>class with a static member function but this furls up the source :(. If
>local classes with static member functions are possible, so are local
>functions! I'd particularly love if local functions had access to the
>scope in which they are defined.

This isn't too hard to do as long as the function doesn't have to
outlive the scope in which it is defined (which it doesn't in your use
case). I wonder whether there are any sample implementations out
there? I saw this:
http://people.debian.org/~aaronl/Usenix88-lexic.pdf. The problem comes
when someone tries to return a pointer to a local function from the
enclosing function...
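A minimal sketch of the local-class workaround being discussed; note the static member cannot see the enclosing function's variables, so everything must be passed in explicitly (all names here are illustrative):

```cpp
// Legal C++98: a class defined inside a function may have static
// member functions, giving a scope-blind stand-in for a local function.
int clamped_sum(int a, int b, int limit) {
    struct Local {
        // Cannot reference a, b or limit directly - no access to the
        // enclosing scope, which is exactly Niall's complaint.
        static int clamp(int v, int hi) { return v > hi ? hi : v; }
    };
    return Local::clamp(a + b, limit);
}
```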

>3. Removal of stupid and arbitrary limits. There are lots in here but
>thankfully most are obvious from reading through the standard. Off the top
>of my mind, why can't you define operator new & delete within a namespace
>(why aren't they just treated like normal functions with an irregular call
>syntax)?

The new and delete expressions are operators, even though the
allocation functions they invoke look like ordinary functions. However,
the name lookup of operator new/delete could be improved, as you say.

> Why can't you template a typedef?

This is actually quite complicated, and there is no (or little)
implementation experience yet. The committee isn't a design team, it's
there to standardise existing practice.

>Why doesn't the standard
>mandate marking in the mangled symbol of non-throwing functions so the
>compiler can warn at compile time when a function shouldn't be
>non-throwing or indeed when it could (it also helps the optimiser here)?

Compile time exception specification checking has been done to death.
It'll probably never happen.

>Why can't you initialise aggregate types with compile-time known functions
>(I'm thinking compile-time variable arrays here)?

What do you mean?

>Why can't you mark
>virtual functions as being "once" in all subclasses (ie; if there are
>virtual functions A::foo() and B::foo and class C inherits A and B in that
>order, A's foo() points at B's foo() rather than there being two
>(ambiguous) foo()'s) - such a feature would greatly ease mixing run-time &
>compile-time polymorphism without introducing the costs of virtual base
>class inheritance? Why can't templates have parameterised pieces of code
>so we can finally get rid of macros?

Such things would just be macros by another name, surely. Unless you
have some suggestions for how name-lookup, scoping, export, etc. would
work with this? Would an improved preprocessor serve the same purpose?

Do you have an example of what you want to use this for? I don't
generally use macros to parametrise "code blocks", only when I need
__LINE__ and __FILE__ in debugging/logging really.

>And why is the only way to determine
>an unknown type is via template argument deduction via a function which
>severely castrates your metaprogramming flexibility (and forces stilted
>design)?

You'll have to wait for this one, or use the "typeof" extensions that
a few compilers offer (which differs from what will be standardised).

>I can go on, but the "D" programming language has many more good
>ideas here.
>
>4. Lack of basic tools in the STL (eg; a hash based dictionary container,
>reference counting abstract base classes (preferably based on
>policy-driven smart pointers), LRU caching etc. etc). This stuff is hardly
>new technology - I threw together a generic hash based dictionary
>container in a few hours using map, vector and list. It's really not hard.
>And no, I don't like hash_map from the SGI library as it's not
>configurable enough.

TR1 will have both of these.

>5. Speaking of which, the STL in general annoys me. I deliberately don't
>use large sections of it and I'm hardly alone here - a large minority of
>C++ programmers refuse to use the STL at all. The thing which annoys me
>the most is the naming scheme - it consistently names the functions the
>least likely thing you'd think of and I personally can't see why lots more
>synonyms couldn't be added

Code bloat?

>(eg; list<>::splice() could also be called
>list<>::relink() or craziest of all, list<>::move())

Splice is the traditional name for the operation, isn't it? The other
two are less clear to me at least. The classic example of bad naming
is "empty", but it takes all of 10 minutes to become familiar with the
naming conventions used...

>. It's this style
>issue and it being totally contrary to how I write my own C++ which makes
>me always uncomfortable using it, which is why I wrapped it with a thin
>veneer to provide Qt API emulations (see
>http://tnfox.sourceforge.net/TnFOX/html/group___q_t_l.html).

That doesn't seem to offer splice under any name that I can see.

>The language itself is generally good, though it has an unfortunate habit
>of requiring more keyboard typing to create better C++ which means
>laziness automatically causes bad C++. However its weakest point IMHO is
>the STL which if I had absolute control over the ISO standardisation
>process I'd personally start most of it again from scratch and supply thin
>veneers emulating the existing STL's API. My new STL would be far more
>policy driven as the existing lack of same prevents usage of STL-provided
>functionality (eg; searching a list when the predicate isn't fixed nor
>simple).

The STL probably could do with a rewrite, but probably not for the
reasons you give. e.g. property maps, standardise segmented iterators,
etc. I think the existing policy set for the containers is generally
pretty good - I've never needed to configure them in any way not
supported.

>(Note that I am aware that some of my top five most annoying things about
>C++ either are being addressed or have been addressed in the next
>standard. What's wrong is that none of the above five items require much
>work - they are simple - and could be safely added to the standard within
>months rather than years.

Really? And who is going to do the work? What is going to be
standardized? Do you have a working implementation of the features you
suggest? Have lots of people used it and given you feedback?

> They would also require trivial quantities of
>additional work by compiler vendors. The fact they are not being applied
>with haste means millions of more lines of code must be written to
>circumvent them, certainly hundreds of thousands of more programmer hours
>must be wasted and I think this is very irresponsible).

Have you put any time into the standards process yourself? Do you know
the issues and processes involved in standardisation?

Tom
--
C++ FAQ: http://www.parashift.com/c++-faq-lite/
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html

Niall Douglas

May 25, 2004, 10:02:17 PM5/25/04
to
On Tue, 25 May 2004 03:56:06 +0000 (UTC), llewelly
<llewe...@xmission.dot.com> wrote:

>> Sure, the Unix method makes your job easier to produce something which
>> functions. However as anyone who has ever compared real-world
>> application performance between Windows and Unix realises, Unix GUI
>> applications just go slower - and I was even comparing two
>> applications both using Qt.
>
> In my experience, this has far more to do with the multiple (and
> complicated!) layers of abstraction which come with the X window
> system. Note that it is generally true for statically linked
> executables as well as dynamically linked. Also, the opposite is
> generally true for non-gui applications, due to more aggressive fs
> caching. I'm not saying the unnecessarily visible names which
> often result from the unix model (note, not always, they can be
> made not visible) and GOT lookups and such have no effect on
> performance; (they are a performance problem) but for most apps,
> I/O overwhelms most other factors.

I would agree - however, the X server is strongly decoupled from the
application and is far more efficient than Windows at avoiding unnecessary
GUI work. A lot of the time in my experience the X11 application just
trundles along and its GUI gets updated some time later.

However given your point, how about doxygen? Running the same on Linux and
Win32 it's at least twice as fast on Win32. Even allowing that doxygen is
running in VMware, which will come with some penalty (not much for CPU
bound tasks like doxygen), there's something wrong there.

Granted though it's getting better. GCC 3.4 produces noticeably better
quality code than v3.3 and earlier. PIC binaries are much smaller (by 50%
with TnFOX). And Linux 2.4 kernels aren't fast with memory mapped i/o (as
against FreeBSD or Windows) which will affect things too.

> What do you think TC1 is, if not the better-labeled equivalent of a
> point release?

I thought TC1 was no more than wording changes to the standard? ie; to fix
ambiguities and contradictions and no more? That isn't what I'd call a
point release, more a bugfix release.

>> My top five
>> most annoying features of C++ are:
>>
>> 1. Lack of move constructors. It really, *really* annoys me to see the
>> generated assembler for using say a string class and see loads of
>> pointless copying. With move constructors the compiler can optimise
>> the case when the object being copied is obviously about to get
>> destroyed. Also, around 5% of the classes I design operate with
>> move semantics which is a royal PITA when your copy constructor
>> takes const references (either you const_cast or use mutable).
>>
>> 2. Lack of local functions. I use a lot of "undoable operations" to
>> roll back partially completed work in case of an exception. I got
>> the idea originally from Alexandrescu but I've improved on his
>> original work and you can find them at
>> http://tnfox.sourceforge.net/TnFOX/html/group__rollbacks.html.
>> Sometimes
>> the rollback can be quite complex and here you need local functions
>
> IMO, 1. and 2. are serious changes. Do you think serious changes belong
> in
> point releases? I don't. Note, I'm in favor of move constructors,
> and possibly local functions (depending on semantics) as well.

Well, I suppose I'm basing it on my experience with GCC. Local functions
would be easy enough to add, move constructors also though I would have
difficulties with modifying the optimiser to fully make use of them. Of
course the GCC masters aren't keen on extra-specification extensions right
now.

For both, the infrastructure is already there (copy constructors and static
member functions of local classes). Most compilers even give a specific
warning that local functions are not supported if you try.

>> 3. Removal of stupid and arbitrary limits. There are lots in here but
>> thankfully most are obvious from reading through the standard. Off
>> the top of my mind, why can't you define operator new & delete
>> within a namespace (why aren't they just treated like normal
>> functions with an irregular call syntax)?
>
> A delete expression may choose the deallocation function (that's the
> user-definable operator delete) dynamically, based on the runtime
> type of the deleted object. So a user-defined operator delete
> within a namespace would naturally apply to all *types* declared
> in said namespace, *not* to all delete-expressions in said
> namespace.

I'm not sure I get you. Surely Koenig lookup takes care of finding which
operator delete to use? After that, if the type has a virtual destructor
you call that, else you generate (or call) a suitable destructor based
upon your static knowledge of the type?

> A desirable feature, IMO, but it potentially comes
> with some difficult design work, and, most importantly, design
> work that can't assume a delete-expression is just 'irregular
> call syntax' - the semantics for what function is called
> are also different, and a delete-expression does more than just
> call the deallocation function anyway.

I was thinking basically that if you define a "namespace Secure" and
within that namespace new & delete work with a special secure heap it
would be most useful. Interestingly I didn't know this wasn't possible
originally and it worked fine in MSVC and nearly in GCC (occasionally it
got the wrong delete). Maybe this latter problem is what you refer to?
Only ICC barfed with the correct error that this was not supported by the
standard.
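A sketch of the nearest standard-conforming approximation to a "namespace Secure" heap: class-scope allocation functions, which the new/delete expressions do find via the type being allocated. secure_alloc/secure_free are hypothetical stand-ins for a secure heap; here they merely count outstanding allocations:

```cpp
#include <cstdlib>

// Hypothetical "secure heap" entry points - just counters over malloc.
static int live_secure_blocks = 0;
void* secure_alloc(std::size_t n) { ++live_secure_blocks; return std::malloc(n); }
void  secure_free(void* p)        { --live_secure_blocks; std::free(p); }

// Standard C++ allows allocation functions at class scope; new and
// delete expressions on this type route through them automatically.
struct SecureObj {
    static void* operator new(std::size_t n) { return secure_alloc(n); }
    static void  operator delete(void* p)    { secure_free(p); }
    int payload;
};
```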

>> Why doesn't the standard mandate marking in the mangled
>> symbol of non-throwing functions so the compiler can warn at
>> compile time when a function shouldn't be non-throwing or indeed
>> when it could (it also helps the optimiser here)?
>
> *shrug* C++ exception specifications seem broken beyond all repair,
> and in any case, it seems most all C++ programmers who use
> exceptions are focused on using them for non-local error
> handling. Why complicate their lives with a feature designed for
> local error handling? If C++ is to gain a feature to force
> specified errors to be handled locally, it shouldn't have
> anything to do with exceptions.
>
> Rather, if the committee is going to spend any time on exception
> specifications, they should consider deprecating them. Probably,
> this either isn't feasible, or isn't worth the effort, so the
> committee's time IMO seems better spent on other things.

I quite like the "throws nothing" specifier. It's the only one I use. All
the others from my examination of the generated assembler seem to generate
type checking code which is inefficient.

> If the committee is concerned with making exceptions *safer*, I would
> appreciate it if they would ask if the programmer can readily provide
> enough information so that the implementation can diagnose simple
> but common violations of exception guarantees.

One thing I really dislike (and I should have included it in my top five,
but I forgot) is what the standard mandates if you throw an exception
while you're in the middle of handling one. That means programmer error
causes an immediate exit of the process with no chance to clean up (I
don't view terminate_handler() as a realistic way of properly cleaning up
a large multithreaded application!)

In TnFOX I implemented a complex but efficient mechanism for nested
exception handling. If you throw while one is being handled, it's still a
fatal exception but the stack gets fully unwound and all threads get a
chance to unwind theirs too. Best of all, it gets to save information so
even in release builds a decent error report can be given detailing what
went wrong and where. I've grown so used to it that I forget this original
limitation nowadays :)

My C++ is written expecting that any line can throw any exception unless
marked with "throw()". This I've found means you must design and write
code in quite a different fashion than without this constraint - OTOH, you
get code which correctly handles any situation no matter how esoteric.
However it does mean that my code tends to use lots of exceptions - for
indicating parameters are out of bounds for example - and my exception
base class is very "fat". And the heavy use of exceptions causes
particular problems for destructors, as it's very easy to throw unwittingly
when a destructor is running higher up the call stack, which can trigger
a fatal exit again!
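One commonly recommended defensive pattern for exactly this hazard, sketched minimally (Flusher and do_flush are made-up names): never let an exception escape a destructor at all.

```cpp
// An exception escaping a destructor during stack unwinding calls
// terminate(), so the destructor catches everything itself.
struct Flusher {
    bool* flushed;
    explicit Flusher(bool* flag) : flushed(flag) {}
    ~Flusher() {
        try {
            do_flush();      // stand-in for work that might throw
            *flushed = true;
        } catch (...) {
            // swallow: log or record here, but never rethrow
        }
    }
    void do_flush() {}       // hypothetical throwing operation
};
```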

If one is to write secure & robust applications in C++ then these
misfeatures can cause you a lot of hassle - getting my nested exception
handling framework working was hard. Certainly more hassle than it would
have been with slightly improved wording of the standard.

>> Why can't you
>> initialise aggregate types with compile-time known functions (I'm
>> thinking compile-time variable arrays here)?
>
> What do you mean by this? Surely your compiler supports dynamic
> initialization

Well I'll tell you what I was doing. I wanted a metaprogramming class
which generates a jump table for a typelist so that a run-time index into
the typelist will call code specialised for that type. I couldn't get
better than a 16 member static const array of function pointers each of
which was initialised by a template which specialised a supplied template
with that type. While another bit of metaprogramming could assemble as
many chunks of 16 as required I couldn't get them to get stored
consecutively and thus would have to have a series of if() statements
choosing which table to use (or I could have a jump table jumping into
jump tables). Ideally I'd want to generate an array of arbitrary length
initialised with an arbitrary number of function pointers but the
aggregate initialiser syntax won't help you here. Non-standard GCC (and I
think MSVC) extensions will though so it can't be hard to implement.
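A stripped-down sketch of the single-chunk version described above; the handler template here just records its index rather than doing any type-specialised work:

```cpp
// A static array of function pointers, each instantiated from one
// template, indexed at run time - the core of the jump-table idea.
int last_called = -1;

template<int N>
void handler() { last_called = N; }   // stand-in for per-type code

typedef void (*Handler)();

const Handler table[4] = {
    &handler<0>, &handler<1>, &handler<2>, &handler<3>
};

void dispatch(int index) { table[index](); }
```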

>> I deliberately
>> don't use large sections of it and I'm hardly alone here - a large
>> minority of C++ programmers refuse to use the STL at all.
>
> IMO, most of them are costing themselves time, costing their
> co-workers time, and if you work with their code, costing you time.

I agree that the benefits of everyone using the same library are
undervalued. However that said, I personally tend to use something if it
seems to me to add value. If it doesn't, I don't use it.

A good example here is choice of metaprogramming library. Dave Abrahams
tried his best to get me to use Boost but I instead went with writing my
own. It's not that I have anything against Boost apart from it
unfortunately replicating the style of the STL, but my own library is much
easier to use and less obtuse as I can mandate use of a partial template
specialisation supporting compiler.

>> The language itself is generally good, though it has an unfortunate
>> habit of requiring more keyboard typing to create better C++ which
>> means laziness automatically causes bad C++. However its weakest point
>> IMHO is the STL which if I had absolute control over the ISO
>> standardisation process I'd personally start most of it again from
>> scratch and supply thin veneers emulating the existing STL's API.
> [snip]
>
> What would this achieve? The standard only defines an interface.

It would achieve a migration path onto a better standard template library.

> I realize that those of you who never leave the bounds of Purified
> Natural Filtered Sanctified OO find the STL's favortism towards
> value types bizzare, confusing, and inappropriate, but those of
> us who don't have entity types falling out of ears generally
> appreciate it.

Heh, I am one of the few programmers I know of who dislikes OO and thinks
it a bad thing.

> Again, it would be niced to see it improved in various ways, and nice
> to see it complemented by something that better supported entity
> types, but it shouldn't be removed or replaced.

I'm afraid I don't know what an entity type is. What I would like to see
in a replacement STL is use of traits so metaprogramming can far better
optimise operation, a layered hierarchy of implementation by inheritance
(policies) so it's much easier to reuse parts of a STL container
implementation but reimplement other parts (this is hard right now) and
better genericity eg; iterators which operate the same no matter which
container they belong to (eg; list<>::iterator could be +='d and -='d just
like a vector<>::iterator). I would also redo algorithms as worker classes
rather than supposedly generic functions eg; ContainerSorter which when
instantiated with a vector<> instance provides the "perfect" stable and
non-stable sorts etc. but can also be subclassed to customise it further.

I vaguely remember Alexandrescu saying he was working on an STL
reimplementation called YASLI. Did anything ever come of that?

Cheers,
Niall

Wil Evers

May 25, 2004, 10:45:50 PM5/25/04
to
In article <opr8ikyh...@news.iol.ie>, Niall Douglas wrote:

> Though I cannot speak for them, I cannot see Microsoft up-ending the
> Windows DLL mechanism just for ISO C++ compliance. DLL's are absolute core
> to Windows because unlike Unix it was designed from the beginning with
> them in mind. If the ISO C++ spec adopts a mechanism incompatible with
> Win32, then that feature will likely be ignored on the vast majority of
> the world's computers.

Obviously Microsoft isn't going to break the current DLL linking and loading
machinery. However, I see no reason why Microsoft wouldn't be able to
*extend* it to support other (and more C++-friendly) approaches to dynamic
linking.

- Wil

--
Wil Evers, DOOSYS R&D BV, Utrecht, Holland
[Wil underscore Evers at doosys dot com]

Davide Bolcioni

May 25, 2004, 10:46:35 PM5/25/04
to
Niall Douglas wrote:

> On Sun, 23 May 2004 00:18:04 +0000 (UTC), Davide Bolcioni wrote:

>> the notion that shared libraries are
>> not something which should concern the language, only the linker, made
>> my job exceedingly easier.
>
>
> Sure, the Unix method makes your job easier to produce something which
> functions. However as anyone who has ever compared real-world
> application performance between Windows and Unix realises, Unix GUI
> applications just go slower - and I was even comparing two applications
> both using Qt.

I cannot comment on that, as my exposure to Qt was limited; my personal
impression on the old Athlon 700 I'm currently using is that Linux is
perceptibly faster, so your mileage may vary.

> If you go the Unix route, code *quality* suffers because it's much
> easier for the programmer to be lazy. If you go the Windows route, you
> get as perfect quality as you can for free with slightly more work. You
> already know which I prefer but I'll elaborate below.

I disagree, see below.

>>> BTW in N1496 appendix item 4 I can confirm that the Unix mechanism
>>> cannot be implemented on Windows.
>>
>> Given the time scale expected of C++0x, maybe the right question is
>> whether it is implementable for .NET ?
>
> I personally feel that when producing language standards as with any
> standard, the format should be big release, then point 1 release and
> point 2 release with point 2 released as the next big release is in
> preparation. ISO C++ has not followed this, much to its detriment :(

Point release or major release, it is due in a few years; I expect Win32
to be legacy by then, in the sense Win16 was, and that was the point of
my remark. We might be saddling C++0x with a solution for a problem
which solved itself, and leave it unable to express a solution for a
real problem of its time. Furthermore, we have C99 which is similarly
silent on the matter and will be for a few more years.

> Now we have most major compilers nearly complying with the standard ...
[cut]

Interesting points, each probably deserves its own thread so I'll not
elaborate here.

>> This should be of interest only to people implementing a linker, in my
>> opinion.
>
> Though I cannot speak for them, I cannot see Microsoft up-ending the
> Windows DLL mechanism just for ISO C++ compliance. DLL's are absolute
> core to Windows because unlike Unix it was designed from the beginning
> with them in mind.

I agree this is unlikely. Any such matter would be addressed by the
compiler, as they're now.

> If the ISO C++ spec adopts a mechanism incompatible
> with Win32, then that feature will likely be ignored on the vast
> majority of the world's computers.

The C++ standard currently ignores this peculiar behavior of a widespread
platform and this has not hindered its success on said platform; the
problem is not "C++ does not work on Windows DLLs" so I see no need to
solve that. Unless I mistake your arguments, the problem is "the Unix
implementation is slow".

>> In other words: we don't tell the programmer how the program will be
>> cobbled together, he had better make sure it works anyway. Guidelines as
>> to which tricks not to pull in order to not have things break
>> unexpectedly would be really welcome and in my humble opinion belong in
>> a language standard: FORTRAN got much mileage from its rules for common
>> variable access, for example.
>
> The trouble with this approach is that it produces sub-optimal code. If
> the compiler cannot know if a symbol is local to this DSO/DLL or not, it
> must assume the worst and run *every* *single* *access* through an
> indirection lookup. This is why code compiled with GCC using -fPIC is
> markedly slower than code compiled without and it's also more bloated.

This might sound blunt, but it is probably a problem with GCC. If I
recall my education correctly, what you plan to achieve goes under the
name of "whole program optimization"; in other words, "gcc -c" might
produce an "unfinished" object file where this choice of access is
deferred, to be resolved when "gcc" is invoked for the final link step
just before handing the .o files to the linker (nobody said you have to
give the linker exactly the same .o files you started with). Since the
decisions in this step are C++ specific, it is the compiler's job,
not the linker's.

> This is partially where the speed & size differential between MSVC & GCC
> compiled binaries comes from. The trouble with "default shared" is that
> it encourages laziness - most programmers will do the minimum to get a
> working application and then stop. If it's "default private" then
> programmers must add the extra annotation to even reach a working binary
> - which is good, because code quality improves substantially.

In C, to hide a symbol you make it static. In C++ we have the unnamed
namespace and access specifiers for classes; this might or might not
be enough to provide the information you need for faster binaries, so
it might prove helpful to address this in the standard along the lines
of "if you do such and such, the translator might be forced to assume
that and produce underperforming code".

In other words, C/C++ programmers already take a visibility decision in
the code, using the language-provided means which the underlying
implementation is required to support.

> I also feel that whether something is visible outside its DSO/DLL is
> part of its API spec and thus interface contract. If you disagree with
> this, consider how public/protected/private relate to class design and
> note that similar logic applies to both.

It seems to me that you are unwilling to abstract from the concrete
implementation; if what you're after is a further layer of
modularization, you might want to research "module" concepts such as
those found in Modula languages or the "package" concept of Java.

Suppose C++ introduced class support by extending the C function syntax
like this:

void f() public class the_class_f_is_public_member_of { ... }

thus achieving what we might call "piecemeal object orientation"; in my
view, your proposal parallels the above by introducing "piecemeal
modularization". Actually, this is a problem with N1496 too.

>> On the contrary, I suggest leaving the language as it stands; just
>> acknowledge that shared libraries are a reality of current
>> implementations and might impose a series of restrictions which should
>> be observed by the programmer.
>
> The "stick head in sand" approach is only useful when a technology is
> brand new and the consequences of misapplying it may be dangerous.
> Shared libraries have been in widespread use for fifteen years - we know
> them well enough now that not acting just makes things worse.

I am not suggesting that the issue should be ignored; it is my point of view
that a specific syntactic construct to address it on a symbol by symbol
basis is inappropriate. A "modules for C++" proposal might be a more
appropriate solution for your problem; improving the compiler might also
provide most of the benefits you're after, and rephrasing the standard
to address the issue explicitly might also help a lot.

Please note that this is as much an objection to N1496 in its present
form as to the choice of making "hidden" the default visibility. In
fact, on a second reading of N1496 it seems to me that the missing
term for "linkage unit" which troubles the author might be "module".

Thank you for your attention,
Davide Bolcioni
--
Linux - the choice of a GNU generation.

Howard Hinnant
May 26, 2004, 12:15:06 AM
In article <tb36b0h3ef6j2k8pq...@4ax.com>,
tom_u...@hotmail.com (tom_usenet) wrote:

> >1. Lack of move constructors. It really, *really* annoys me to see the
> >generated assembler for using say a string class and see loads of
> >pointless copying. With move constructors the compiler can optimise the
> >case when the object being copied is obviously about to get destroyed.
> >Also, around 5% of the classes I design operate with move semantics which
> >is a royal PITA when your copy constructor takes const references (either
> >you const_cast or use mutable).
>
> For now you have many options:

Indeed, too many options. ;-)



> - Move objects with memcpy. This works fine when the object isn't
> self-referencing (directly or indirectly), but is non-standard for
> non-PODs of course. For the vast majority of "value" types (including
> std::containers on implementations I know), memcpy works fine.

Count Metrowerks node-based containers out. They've been
self-referencing for five or so years. I believe gcc 3.4 node-based
containers are now self-referencing, pretty sure about std::list, not as
sure about the others. I also believe Dinkumware's std::string is
self-referencing, but again I'm not positive about that. So be extra careful
before you memcpy something you're not intimately familiar with. I
might wager everyone's vector is movable with memcpy, but I'm not sure
how many other std::containers you could say that about.



> - Use mojo, or something similar.

A valiant attempt at move within the current language. It is not
without drawbacks such as disabling RVO (the ultimate in move semantics).

> - Use default construction and std::swap.

Suboptimal for heavyweight objects that are movable with memcpy. Very
suboptimal for PODs. Doesn't work at all for types without default
ctors (e.g. std::containers don't require their value_type to have a
default ctor).

> - Just use the suboptimal copy/destruct.

I.e. use copy semantics instead of move semantics.

And when writing generic code, how do you know which option is best? Or
even viable? Today, only the last option is always viable in generic
code (use copy semantics).

I agree with Niall. Lack of move semantics is a /major/ drawback in
today's C++.

-Howard

llewelly
May 26, 2004, 12:20:15 PM

===================================== MODERATOR'S COMMENT:
Please take the VMware discussion to private email.


===================================== END OF MODERATOR'S COMMENT
s_googl...@nedprod.com (Niall Douglas) writes:
[snip]


> However given your point, how about doxygen? Running the same on Linux
> and Win32 it's at least twice as fast on Win32. Even including that
> doxygen is running in VMWare which will come with some penalty (not

doxygen running in VMWare? Why? If so, it makes nonsense of your
comparison; VMWare has almost no overhead for some ops but very high
overhead for others.

[snip]


>> What do you think TC1 is, if not the better-labeled equivalent of a
>> point release?
>
> I thought TC1 was no more than wording changes to the standard? ie; to
> fix ambiguities and contradictions and no more? That isn't what I'd
> call a point release, more a bugfix release.

I had thought, up to this point, that point release was a synonym for
bugfix release. Apparently, you think it is something else.

>
>>> My top five
>>> most annoying features of C++ are:
>>>
>>> 1. Lack of move constructors. It really, *really* annoys me to see the
>>> generated assembler for using say a string class and see loads of
>>> pointless copying. With move constructors the compiler can optimise
>>> the case when the object being copied is obviously about to get
>>> destroyed. Also, around 5% of the classes I design operate with
>>> move semantics which is a royal PITA when your copy constructor
>>> takes const references (either you const_cast or use mutable).
>>>
>>> 2. Lack of local functions. I use a lot of "undoable operations" to
>>> roll back partially completed work in case of an exception. I got
>>> the idea originally from Alexandrescu but I've improved on his
>>> original work and you can find them at
>>> http://tnfox.sourceforge.net/TnFOX/html/group__rollbacks.html. Sometimes
>>> the rollback can be quite complex and here you need local functions
>>
>> IMO, 1. and 2. are serious changes. Do you think serious changes
>> belong in
>> point releases? I don't. Note, I'm in favor of move constructors,
>> and possibly local functions (depending on semantics) as well.
>
> Well, I suppose I'm basing it on my experience with GCC.

It's a major change to the standard because, in order to be useful,
it needs to be extensively used in and by the standard library.

> Local
> functions would be easy enough to add, move constructors also though I
> would have difficulties with modifying the optimiser to fully make use
> of them. Of course the GCC masters aren't keen on extra-specification
> extensions right now.

Easy to add to the core language, maybe. Easy to add to all the
appropriate spots in the standard library? I don't think so.

>
> For both the infrastructure is already there (copy constructors and
> static member functions of local classes). On most compilers it even
> has a specific warning that local functions are not supported if you
> try.
>
>>> 3. Removal of stupid and arbitrary limits. There are lots in here but
>>> thankfully most are obvious from reading through the standard. Off
>>> the top of my mind, why can't you define operator new & delete
>>> within a namespace (why aren't they just treated like normal
>>> functions with an irregular call syntax)?
>>
>> A delete expression may choose the deallocation function (that's the
>> user-definable operator delete) dynamicly, based on the runtime
>> type of the deleted object. So a user-defined operator delete
>> within a namespace would naturally apply to all *types* declared
>> in said namespace, *not* to all delete-expressions in said
>> namespace.
>
> I'm not sure I get you. Surely Koenig lookup takes care of finding
> which operator delete to use?

Not for types with virtual destructors. See 12.5/4:

# [...] if the delete-expression is used to deallocate a class
# object whose static type has a virtual destructor, the
# deallocation function is the one found by the lookup in the
# definition of the dynamic type's virtual destructor
# (12.4). 104) [...]


> After that, if the type has a virtual
> destructor you call that, else you generate (or call) a suitable
> destructor based upon your static knowledge of the type?

The destructor is called before the deallocation function.

>
>> A desireable feature, IMO, but it potentially comes
>> with some difficult design work, and, most importantly, design
>> work that can't assume a delete-expression is just 'irregular
>> call syntax' - the semantics for what function is called
>> are also different, and a delete-expression does more than just
>> call the deallocation function anyway.
>
> I was thinking basically that if you define a "namespace Secure" and
> within that namespace new & delete work with a special secure heap it
> would be most useful. Interestingly I didn't know this wasn't possible
> originally and it worked fine in MSVC and nearly in GCC (occasionally
> it got the wrong delete).

With GCC, the namespaced deallocation function is called if the
delete-expression is used to deallocate an object whose static
type has a virtual destructor. Otherwise, it is ignored.

>>> Why doesn't the standard mandate marking in the mangled
>>> symbol of non-throwing functions so the compiler can warn at
>>> compile time when a function shouldn't be non-throwing or indeed
>>> when it could (it also helps the optimiser here)?
>>
>> *shrug* C++ exception specifications seem broken beyond all repair,
>> and in any case, it seems most all C++ programmers who use
>> exceptions are focused on using them for non-local error
>> handling. Why complicate their lives with a feature designed for
>> local error handling? If C++ is to gain a feature to force
>> specified errors to be handled locally, it shouldn't have
>> anything to do with exceptions.
>>
>> Rather, if the committee is going to spend any time on exception
>> specifications, they should consider deprecating them. Probably,
>> this either isn't feasible, or isn't worth the effort, so the
>> committee's time IMO seems better spent on other things.
>
> I quite like the "throws nothing" specifier. It's the only one I
> use. All the others from my examination of the generated assembler
> seem to generate type checking code which is inefficient.

True, but not the only thing that is wrong with exception
specifications.

>
>> If the committee is concerned with making exceptions *safer*, I would
>> aprreciate it if they would ask if the programmer can readily provide
>> enough information so that the implementation can diagnose simple
>> but common violations of exception guarantees.
>
> One thing I really dislike (and I should have included it in my top
> five, but I forgot) is what the standard mandates if you throw an
> exception while you're in the middle of handling one. That means
> programmer error causes an immediate exit of the process with no
> chance to clean up (I don't view terminate_handler() as a realistic
> way of properly cleaning up a large multithreaded application!)!

If cleanup results in an error, can you safely clean up? The standard
assumes not.

There are surely applications for which this isn't true, but I think
they are a minority. I don't know if that minority is important
enough to justify the work necessary to define and implement
something smarter.

[snip]

>>> The language itself is generally good, though it has an unfortunate
>>> habit of requiring more keyboard typing to create better C++ which
>>> means laziness automatically causes bad C++. However its weakest point
>>> IMHO is the STL which if I had absolute control over the ISO
>>> standardisation process I'd personally start most of it again from
>>> scratch and supply thin veneers emulating the existing STL's API.
>> [snip]
>>
>> What would this achieve? The standard only defines an interface.
>
> It would achieve a migration path onto a better standard template
> library.

Sorry, I misunderstood your original statement; I didn't realize you
intended to provide a new library, in addition to the existing
STL.

>
>> I realize that those of you who never leave the bounds of Purified
>> Natural Filtered Sanctified OO find the STL's favoritism towards
>> value types bizarre, confusing, and inappropriate, but those of
>> us who don't have entity types falling out of ears generally
>> appreciate it.
>
> Heh, I am one of the few programmers I know of who dislikes OO and
> thinks it a bad thing.

Obviously my biases are leading me to make false assumptions
again. :-)

>
>> Again, it would be nice to see it improved in various ways, and nice
>> to see it complemented by something that better supported entity
>> types, but it shouldn't be removed or replaced.
>
> I'm afraid I don't know what an entity type is.

An entity has identity. Even if you and I both have the same make,
year, model, and options on a car, my car is not your
car. Examples include bikes, shoes, ostreams, etc.

An entity type is a type you instantiate to get an entity.

An example of a non-entity type is an int. An int is a value type;
when dealing with ints, any two ints which have the value 5 are
the same.

[snip]

> I vaguely remember Alexandrescu saying he was working on a STL
> reimplementation called YASLI. Did anything ever come of that?

http://www.moderncppdesign.com/code/main.html

Dietmar Kuehl
May 26, 2004, 2:01:52 PM
Niall Douglas <s_googl...@nedprod.com> wrote:
> 4. Lack of basic tools in the STL (eg; a hash based dictionary container,
> reference counting abstract base classes (preferably based on
> policy-driven smart pointers), LRU caching etc. etc).

Apparently you are confusing "STL" with the standard C++ library: the STL
is, essentially, a library of algorithms plus a few supporting classes
(eg. iterators and functors) and a few usage examples (ie. the containers).
STL is all about sequences, where a "reference counting abstract base class"
does not really fit.

Hashes can be, and have been, added easily through third-party providers or
extensions to standard libraries. Actually, STL is all about algorithms for
an open set of sequences. There is nothing wrong with this approach IMO: you
wouldn't want C++ to ship with a particular database either. The same view
applies to advanced containers, added algorithms, etc. Still, hashes (under
the awkward name "unordered_associative_...") are on the way into the
standard, as are smart pointers for reference counting (which is a much more
sensible approach than having some form of base class, IMO).

> The language itself is generally good, though it has an unfortunate habit
> of requiring more keyboard typing to create better C++ which means
> laziness automatically causes bad C++. However its weakest point IMHO is
> the STL which if I had absolute control over the ISO standardisation
> process I'd personally start most of it again from scratch and supply thin
> veneers emulating the existing STL's API.

STL is C++'s strongest point - despite being wrong: the concepts STL is
based on are too coarse-grained. This is my favorite complaint: iterators
combine two distinct concepts, namely cursors for positioning and property
maps for attribute access (BTW, an implementation of algorithms based on
property maps is available from <http://www.dietmar-kuehl.de/cool/>). I'm
not aware of any other language which ships with an extensive library of
algorithms being efficient while also being applicable to [nearly] arbitrary
sequences (the restriction is due to being based on the wrong concepts, of
course :-)

> My new STL would be far more
> policy driven as the existing lack of same prevents usage of STL-provided
> functionality (eg; searching a list when the predicate isn't fixed nor
> simple).

I'd say you should pick up a good book about STL and try to understand what
it is all about...

> (Note that I am aware that some of my top five most annoying things about
> C++ either are being addressed or have been addressed in the next
> standard. What's wrong is that none of the above five items require much
> work - they are simple - and could be safely added to the standard within
> months rather than years. They would also require trivial quantities of
> additional work by compiler vendors. The fact they are not being applied
> with haste means millions of more lines of code must be written to
> circumvent them, certainly hundreds of thousands of more programmer hours
> must be wasted and I think this is very irresponsible).

My short-term memory is extremely bad but I can't remember having met you
at the last few committee meetings: with your claimed efficiency you should
be able to really speed things up. For some non-obvious reason the progress
made without your excellent help is much slower. A possible reason may be
that those investing in standardization also have other stuff to do. The
irresponsibility you accuse the standardization process of is possibly due
to the fact that only a few people have the time to provide quality
papers on the various issues and move the working groups through all those
details swiftly.
--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.contendix.com> - Software Development & Consulting

Pavel Vozenilek
May 26, 2004, 3:04:47 PM

"Niall Douglas" <s_googl...@nedprod.com> wrote:


> I vaguely remember Alexandrescu saying he was working on a STL
> reimplementation called YASLI. Did anything ever come of that?
>

He wanted to create an STL implementation - the interfaces would remain the
same but the inner parts would be cleaner/faster/use advanced techniques/etc.

/Pavel

Francis Glassborow
May 26, 2004, 11:22:32 PM
In article <opr8kly6...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>I'm not sure I get you. Surely Koenig lookup takes care of finding
>which operator delete to use? After that, if the type has a virtual
>destructor you call that, else you generate (or call) a suitable
>destructor based upon your static knowledge of the type?


In file f1.cpp

namespace local_new {
    // contains its replacement or overload of operator new and delete
}

mytype * i_ptr = new mytype;   // which operator new applies here?

In file f2.cpp

extern mytype * i_ptr;

namespace local_new {
    void release() { delete i_ptr; }   // and which operator delete here?
}

is just one example of a multitude of possible problems. Unlike classes
where the compiler sees all the declarations in the class definition
there is no requirement that the compiler sees all the declarations in a
namespace in any particular translation unit.

When you are annoyed by something in C++ it is usually a good idea to
discover why it is the way it is. We (I was part of the group whose
responsibility included this part of the core language) very carefully
considered issues of overloading and overriding (i.e. replacing)
operator new and operator delete at namespace scope and concluded that
with the C++ separate compilation model it was impossible to implement,
which is why it is not allowed.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Niall Douglas
May 26, 2004, 11:26:12 PM
On Wed, 26 May 2004 02:46:35 +0000 (UTC), Davide Bolcioni
<6805b...@sneakemail.com> wrote:

>> The trouble with this approach is that it produces sub-optimal code.
>> If the compiler cannot know if a symbol is local to this DSO/DLL or
>> not, it must assume the worst and run *every* *single* *access* through
>> an indirection lookup. This is why code compiled with GCC using -fPIC
>> is markedly slower than code compiled without and it's also more
>> bloated.
>
> This might sound blunt, but it is probably a problem with GCC. If I
> recall my education correctly, what you plan to achieve goes under the
> name of "whole program optimization"; in other words, "gcc -c" might
> produce an "unfinished" object file where this choice of access is
> deferred, to be resolved when "gcc" is invoked for the final link step
> just before handing the .o files to the linker (nobody said you have to
> give the linker exactly the same .o files you started with). Since the
> decisions in this step are C++ specific, it is the compiler's job,
> not the linker's.

Not exactly. The way ELF does shared libraries means that the app can load a
shared library whose symbols override symbols currently defined eg;
printf() calls one implementation before the load and a different one
thereafter. This requires the compiler to always have at
least the equivalent overhead of a virtual function call. Even with WPO
you can't elide this - to do so breaks things unless you assume no new
libraries will be loaded.

Note that for every symbol which is available outside its DSO, a
particularly expensive form of relocation must also be performed at load
time, which on all implementations I know of is quadratic in the number of
symbols.
See Ulrich Drepper's "How to write shared libraries".

>> This is partially where the speed & size differential between MSVC &
>> GCC compiled binaries comes from. The trouble with "default shared" is
>> that it encourages laziness - most programmers will do the minimum to
>> get a working application and then stop. If it's "default private" then
>> programmers must add the extra annotation to even reach a working
>> binary - which is good, because code quality improves substantially.
>
> In C, to hide a symbol you make it static. In C++ we have the unnamed
> namespace and access specifiers for classes; this might or might not
> be enough to provide the information you need for faster binaries, so
> it might prove helpful to address this in the standard along the lines
> of "if you do such and such, the translator might be forced to assume
> that and produce underperforming code".
>
> In other words, C/C++ programmers already take a visibility decision in
> the code, using the language-provided means which the underlying
> implementation is required to support.

No, you're wrong here. A static symbol is indeed hidden in that it cannot
be directly referenced outside its compiland. This is not true for the
contents of unnamed namespaces - they merely add extra mangling to the
symbol which actually makes the problem worse (longer load & linking times
and more code bloat). Why don't unnamed namespaces hide their contents?
Because "export" causes searching of unnamed namespaces outside their
compiland!

Despite the C++ standard saying that static variable and function
declarations are deprecated, they still produce better quality code on
current compilers. I personally use them a lot.

>> I also feel that whether something is visible outside its DSO/DLL is
>> part of its API spec and thus interface contract. If you disagree with
>> this, consider how public/protected/private relate to class design and
>> note that similar logic applies to both.
>
> It seems to me that you are unwilling to abstract from the concrete
> implementation; if what you're after is a further layer of
> modularization, you might want to research "module" concepts such as
> those found in Modula languages or the "package" concept of Java.

Unwilling is unfair - incapable is probably more accurate. I either see
things in very low level terms or far too abstract (eg; an abstract
description of a program seems incomplete to me without including a
description of its programmers and users and cultural & societal context).

I view DLLs/DSOs as a mechanism of compartmentalising code just as
fundamental as classes. But anyone who likes OO won't agree with me because to an OO mind
such divisions are not considered important (and therefore SHOULDN'T be
important). This kind of mental blinkering is precisely why I don't like
OO - it encourages bad practice.

>> The "stick head in sand" approach is only useful when a technology is
>> brand new and the consequences of misapplying it may be dangerous.
>> Shared libraries have been in widespread use for fifteen years - we
>> know them well enough now that not acting just makes things worse.
>
> I am not suggesting that the issue should be ignored; it is my point of
> view
> that a specific syntactic construct to address it on a symbol by symbol
> basis is inappropriate. A "modules for C++" proposal might be a more
> appropriate solution for your problem; improving the compiler might also
> provide most of the benefits you're after, and rephrasing the standard
> to address the issue explicitly might also help a lot.

No, I'd be against modules like, say, Python does them (which is what I think
you're proposing). But why am I? I'm going entirely on gut reaction here -
somehow it seems ill-suited to a static & compiled language. I guess that
because I've only seen module systems in dynamic languages (I mean here
languages which perform some or all of their language processing at
run-time eg; Objective C), I'm inferring there must be a reason why they
don't work so well for a language like C++. Also I like
how right now things can be subdivided one way at the source level (via
namespaces) but a totally different way at binary level - to me, this ADDS
value through intuitiveness.

If you could show me more proposed syntax, I could say a lot more. While
I'm initially negative, I could be very easily swayed as, TBH, I don't have
enough experience to know what your proposal would mean in practice.

Cheers,
Niall

Niall Douglas
May 26, 2004, 11:27:10 PM
On Wed, 26 May 2004 18:01:52 +0000 (UTC), Dietmar Kuehl
<dietma...@yahoo.com> wrote:

> My short-term memory is extremely bad but I can't remember having met you
> at the last few committee meetings: with your claimed efficiency you
> should
> be able to really speed things up. For some non-obvious reason the
> progress
> made without your excellent help is much slower. A possible reason may be
> that those investing in standardization also have other stuff to do. Your
> accused irresponsibility of the standardization process is possibly due to
> the fact that there are only few people who have the time to provide
> quality
> papers on the various issues and move the working groups through all
> those
> details swiftly.

Oh don't get me wrong, I've never participated and I deliberately avoid
having anything to do with groups like this (I've already been here longer
than I intended). But in a way, that's precisely where the value in what I
think comes from - I bring a fresh viewpoint direct from the coal-face
which you may or may not already have enough of.

As with any standardisation process, much of it will be driven by lowest
common denominator, horse-trading between vested interests and a real
paucity of independent funding. But then, cooperation between mutually hostile
interests will always be glacially paced; and after all, now that C++ has
clearly taken the crown of most popular platform-independent programming
language from C, there are in many ways more reasons for it to go slower
rather than faster.

What I am suggesting though is a "fast track" mechanism for getting new
features which absolutely everyone agrees upon into the standard quickly
(six months) rather than having to wait to be bundled with more
contentious features. This carries dangers proportionate to the size of
the feature so should only be allowed for small features eg; move
constructors.

I'm hardly the first to call for such a thing I'm sure - I'm merely saying
"from the coal-face" that certain pretty obvious minor additions to the
feature set are urgently required. The powers that be can take that or
leave it - after all, there is no viable alternative to C++ available in
the area it covers so I nor anyone else really has a choice.

Cheers,
Niall

Gabriel Dos Reis
May 27, 2004, 3:19:25 AM
dietma...@yahoo.com (Dietmar Kuehl) writes:

[...]

| > The language itself is generally good, though it has an unfortunate habit
| > of requiring more keyboard typing to create better C++ which means
| > laziness automatically causes bad C++. However its weakest point IMHO is
| > the STL which if I had absolute control over the ISO standardisation
| > process I'd personally start most of it again from scratch and supply thin
| > veneers emulating the existing STL's API.
|
| STL is C++'s strongest point - despite being wrong: the concepts STL is
| based on are to course-grained. This is my favorite complaint: iterators
| combine two distinct concepts, namely cursors for positioning and property
| maps for attribute access (BTW, an implementation of algorithms based on
| property maps is available from <http://www.dietmar-kuehl.de/cool/>). I'm
| not aware of any other language which ships with an extensive library of
| algorithms being efficient while also being applicable to [nearly] arbitrary
| sequences (the restriction is due to being based on the wrong concepts, of
| course :-)

So, did you mean that if the STL was based on the "right" concepts,
it would not have been as extensive and efficient? :-) :-)

--
Gabriel Dos Reis
g...@cs.tamu.edu
Texas A&M University -- Computer Science Department
301, Bright Building -- College Station, TX 77843-3112

Francis Glassborow
May 27, 2004, 12:59:35 PM
In article <opr8mjlc...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>What I am suggesting though is a "fast track" mechanism for getting new
>features which absolutely everyone agrees upon into the standard
>quickly (six months) rather than having to wait to be bundled with more
>contentious features. This carries dangers proportionate to the size of
>the feature so should only be allowed for small features eg; move
>constructors.

Are there any such issues? Our experience teaches us to be much more
cautious. However there is a way to add new features should we wish to
do so. It is called a normative addendum.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

David Abrahams

May 27, 2004, 12:00:42 PM

s_googl...@nedprod.com (Niall Douglas) writes:

> On Wed, 26 May 2004 18:01:52 +0000 (UTC), Dietmar Kuehl
> <dietma...@yahoo.com> wrote:
>
>> My short-term memory is extremely bad but I can't remember having
>> met you at the last few committee meetings: with your claimed
>> efficiency you should be able to really speed things up. For some
>> non-obvious reason the progress made without your excellent help is
>> much slower. A possible reason may be that those investing in
>> standardization also have other stuff to do. Your accused
> irresponsibility of the standardization process is possibly due to
>> the fact that there are only few people who have the time to provide
>> quality papers on the various issues and move the working groups
>> through all those details swiftly.
>
> Oh don't get me wrong, I've never participated and I deliberately
> avoid having anything to do with groups like this (I've already been
> here longer than I intended). But in a way, that's precisely where the
> value in what I think comes from - I bring a fresh viewpoint direct
> from the coal-face which you may or may not already have enough of.
>
> As with any standardisation process, much of it will be driven by
> lowest common denominator, horse-trading between vested interests and
> a real paucity of independent funding. But then cooperation with
> mutually hostile interests will always be glacially paced

This is what you call a "fresh viewpoint?" To me it sounds
unbelievably jaded and cynical. Maybe you should break your personal
rule and come to a committee meeting to see how things actually work
before you make such judgements. Participating in the C++ committee
and in the relationships that I developed out of it has probably been
the most rewarding professional experience of my life.

> What I am suggesting though is a "fast track" mechanism for getting
> new features which absolutely everyone agrees upon
> into the standard quickly (six months)

When we run across such a feature, maybe we'll have use for that
mechanism. AFAICT it hasn't happened so far. We do have technical
reports that allow us to "bless" features before officially
standardizing them.

Also, 6 months, being only the total time between committee meetings,
is a bit short to expect to give any feature full consideration, get
implementation experience, and draft appropriate specifications.
We've been working on starting a process for making progress between
meetings, similar to Boost's, but even then I think 6 months would be
too short for actually adding a new feature, no matter how much
agreement it got.

> rather than having to wait to be bundled with more contentious
> features. This carries dangers proportionate to the size of the
> feature so should only be allowed for small features eg; move
> constructors.

Wow, if you think that's really small, I'd hate to see a medium-sized
feature. I mean, it's not "templates" or anything, but move
constructors have lots of implications, and no universal agreement.
Note that nobody's even written a complete specification yet.

> I'm hardly the first to call for such a thing I'm sure - I'm merely
> saying "from the coal-face" that certain pretty obvious minor
> additions to the feature set are urgently required.

There's plenty of agreement on that, but the problem is that every
feature takes more work than you seem to think, and we only have so
much volunteer manpower at our disposal. People who "deliberately
avoid having anything to do with groups like this" but expect us to
hurry up and approve their favorite features aren't doing anything to
contribute to the solution.

> The powers that be can take that or leave it - after all, there is
> no viable alternative to C++ available in the area it covers so I
> nor anyone else really has a choice.

There are no powers that be; just a bunch of people who get together
every 6 months to try to make progress, and spend a lot of time having
email discussions and writing proposals in between meetings. You can
either participate and make a difference or you can expect us to
handle the things we have the time, energy, and expertise to work on.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

tom_usenet

May 27, 2004, 1:17:36 PM

On Wed, 26 May 2004 04:15:06 +0000 (UTC), hin...@metrowerks.com
(Howard Hinnant) wrote:

>Indeed, too many options. ;-)
>
>> - Move objects with memcpy. This works fine when the object isn't
>> self-referencing (directly or indirectly), but is non-standard for
>> non-PODs of course. For the vast majority of "value" types (including
>> std::containers on implementations I know), memcpy works fine.
>
>Count Metrowerks node-based containers out. They've been self
>referencing for five or so years. I believe gcc 3.4 node based
>containers are now self referencing, pretty sure about std::list, not as
>sure about the others. I also believe Dinkumware's std::string is self
>referencing, but again I'm not positive about that. So be extra careful
>before you memcpy something you're not intimately familiar with. I
>might wager everyone's vector is movable with memcpy, but I'm not sure
>how many other std::containers you could say that about.

Ahh, I'm glad I haven't been using the memcpy trick then!

From a quick glance at <xstring> in VC7.1, std::string isn't
self-referencing. They have:
    union _Bxty
    {   // storage for small buffer or pointer to larger one
        _Elem _Buf[_BUF_SIZE];
        _Elem *_Ptr;
    } _Bx;

and choose which member to use based on the value of:
size_type _Myres; // current storage reserved for string

>I agree with Niall. Lack of move semantics is a /major/ drawback in
>today's C++.

I agree that the feature is needed. But we need to implement it, try
it out and only once the best solution has been found, standardise it.
I for one would prefer a proposal that included destructive move
semantics, which will in general provide the highest efficiency
gains. I note that your proposal is purely for non-destructive
semantics at the moment.

Have you implemented rvalue references in MWCW, or at least in an
experimental internal version?

---

llewelly

May 27, 2004, 1:17:54 PM

s_googl...@nedprod.com (Niall Douglas) writes:
[snip]
> Note that for every symbol which is available outside its DSO, a
> particularly expensive form of relocation must also be performed at
> load time which on all implementations I know of is quadratic to
> symbol number. See Ulrich Drepper's "How to write shared libraries".
[snip]

This is actually an argument for having symbols not visible by
default.

>
>>> This is partially where the speed & size differential between MSVC
>>> & GCC compiled binaries comes from. The trouble with "default
>>> shared" is that it encourages laziness - most programmers will do
>>> the minimum to get a working application and then stop. If it's
>>> "default private" then programmers must add the extra annotation to
>>> even reach a working binary - which is good, because code quality
>>> improves substantially.
>>
>> In C, to hide a symbol you make it static. In C++ we have the unnamed
>> namespace and access specifiers for classes; this might or might not
>> be enough to provide the information you need for faster binaries, so
>> it might prove helpful to address this in the standard along the lines
>> of "if you do such and such, the translator might be forced to assume
>> that and produce underperforming code".
>>
>> In other words, C/C++ programmers already take a visibility decision in
>> the code, using the language-provided means which the underlying
>> implementation is required to support.
>
> No you're wrong here. A static symbol is indeed hidden in that it
> cannot be directly referenced outside its compiland. This is not true
> for the contents of unnamed namespaces - they merely add extra
> mangling to the symbol which actually makes the problem worse (longer
> load & linking times and more code bloat).

This is true for current implementations, but does it *need* to be
true?

Why might an implementation need to make a symbol inside a unnamed
namespace visible at dynamic link-time?

> Why don't unnamed
> namespaces hide their contents? Because "export" causes searching of
> unnamed namespaces outside its compiland!

[snip]

A typical unix runtime linker has neither the smarts nor the
information available to do the name lookup needed by exported
templates. Further, some experts have claimed it will probably
never be beneficial to instantiate exported templates at program
load time anyway.

So I think this is a bogus argument.

P.J. Plauger

May 27, 2004, 2:02:30 PM

"Niall Douglas" <s_googl...@nedprod.com> wrote in message
news:opr8mjlc...@news.iol.ie...

For someone who's made a point of not understanding the standardization
process, you've certainly developed an elaborate, cynical, and flawed
vision of how it actually works.

> What I am suggesting though is a "fast track" mechanism for getting new
> features which absolutely everyone agrees upon into the standard quickly
> (six months) rather than having to wait to be bundled with more
> contentious features.

a) I don't agree about your features, so not everyone agrees.

b) We only meet once every six months.

c) It takes more than one meeting to prudently agree about anything.

> This carries dangers proportionate to the size of
> the feature so should only be allowed for small features eg; move
> constructors.

Glad you think it's small. It was first raised a couple of years ago
and I still don't see universal agreement even on what it means.

> I'm hardly the first to call for such a thing I'm sure - I'm merely saying
> "from the coal-face" that certain pretty obvious minor additions to the
> feature set are urgently required. The powers that be can take that or
> leave it - after all, there is no viable alternative to C++ available in
> the area it covers so I nor anyone else really has a choice.

Lots of things are pretty obvious to the bloke who works just one
coal face. Work a few more faces and you learn what's not so obvious.
Try to run a mine and you get humbled further still. Try to develop
mine safety standards and you get humbled even more. You, I believe,
have a large vein of humility before you, if only you can learn to mine
it, and benefit from it.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

llewelly

May 27, 2004, 2:02:42 PM

dietma...@yahoo.com (Dietmar Kuehl) writes:

> STL is C++'s strongest point - despite being wrong: the concepts STL is
> based on are to course-grained. This is my favorite complaint: iterators
> combine two distinct concepts, namely cursors for positioning and property
> maps for attribute access

And pairs of iterators are used to represent ranges, which are
arguably a third concept.

> (BTW, an implementation of algorithms based on
> property maps is available from <http://www.dietmar-kuehl.de/cool/>). I'm
> not aware of any other language which ships with an extensive library of
> algorithms being efficient while also being applicable to [nearly] arbitrary
> sequences (the restriction is due to being based on the wrong concepts, of
> course :-)

[snip]

llewelly

May 27, 2004, 5:11:01 PM

da...@boost-consulting.com (David Abrahams) writes:
[snip]

>> What I am suggesting though is a "fast track" mechanism for getting
>> new features which absolutely everyone agrees upon
>> into the standard quickly (six months)
>
> When we run across such a feature, maybe we'll have use for that
> mechanism. AFAICT it hasn't happened so far. We do have technical
> reports that allow us to "bless" features before officially
> standardizing them.
>
> Also, 6 months, being only the total time between committee meetings,
> is a bit short to expect to give any feature full consideration, get
> implementation experience, and draft appropriate specifications.
> We've been working on starting a process for making progress between
> meetings, similar to Boost's, but even then I think 6 months would be
> too short for actually adding a new feature, no matter how much
> agreement it got.
[snip]

Somewhat related, in some ways Boost is a way of experimenting with
potential standard library features *before* they are
standardized.

Maybe Niall should experiment with Boost, and see how well it
fulfills his requirements for a 'fast track mechanism'; does
Niall need feature availability, or does he need features chiselled
into the granite of 14882?

Niall Douglas

May 27, 2004, 5:11:25 PM

[Reposted after fixing moderator's reason for rejection]

On 26 May 2004 16:20:15 GMT, llewelly <llewe...@xmission.dot.com> wrote:

>> Local
>> functions would be easy enough to add, move constructors also though I
>> would have difficulties with modifying the optimiser to fully make use
>> of them. Of course the GCC masters aren't keen on extra-specification
>> extensions right now.
>
> Easy to add to the core language, maybe. Easy to add to all the
> appropriate spots in the standard library? I don't think so.

Given this (I agree), why isn't the core language spec released before the
library spec? I can see of course that the library gives a good testing of
a new language feature so one can backpedal if needs be - so I'd keep
synchronised releases for major updates. Bug fix and point releases could
be done independently so end-users could get on quickly with using
desperately needed minor features.

>>> A delete expression may choose the deallocation function (that's the
>>> user-definable operator delete) dynamically, based on the runtime
>>> type of the deleted object. So a user-defined operator delete
>>> within a namespace would naturally apply to all *types* declared
>>> in said namespace, *not* to all delete-expressions in said
>>> namespace.
>>
>> I'm not sure I get you. Surely Koenig lookup takes care of finding
>> which operator delete to use?
>
> Not for types with virtual destructors. See 12.5/4:

> [snip]

This still doesn't seem to be a problem (if I am understanding correctly).
Does the static type have a vtable? If yes, does it have a virtual
destructor? If yes, does the vtable have an operator delete due to a
subclass defining one? If so use that, otherwise go with the usual rules
for a function lookup. None of this prevents or causes issue with the
compiler performing the usual Koenig followed by in-to-out scope search
for a suitable operator delete. The only difference between choosing which
operator delete to use as against a standard overloaded function is a
little bit of extra work must be done if the type has a vtable.

>> I quite like the "throws nothing" specifier. It's the only one I
>> use. All the others from my examination of the generated assembler
>> seem to generate type checking code which is inefficient.
>
> True, but not the only thing that is wrong with exception
> specifications.

Well I don't know anything about that. If a feature seems to cost more
than it gives to me I just don't use it. If it does give more but throws
stupid hurdles in the way, then I get frustrated. In that sense exception
specifications were done right because using either "throw()" (throws
nothing) or not using them at all doesn't come with unwanted side-effects.
True they may not have achieved what people thought they might, but then
they didn't balls up other stuff either. That to me reflects good design
(is "orthogonal" the right word here?).

>> One thing I really dislike (and I should have included it in my top
>> five, but I forgot) is what the standard mandates if you throw an
>> exception while you're in the middle of handling one. That means
>> programmer error causes an immediate exit of the process with no
>> chance to clean up (I don't view terminate_handler() as a realistic
>> way of properly cleaning up a large multithreaded application!)!
>
> If cleanup results in an error, can you safely clean up? The standard
> assumes not.

But that's singularly unhelpful! How can the standard possibly know what
you're doing with your classes (what if they all have trivial
destructors?)? Maybe you can or maybe you can't clean up, but the standard
just hard exits your process - like a SIGILL. I've taken the approach that
if the nested exception throw corrupts memory or fatally damages
application state, then so be it - then you get SIGILL or whatever and
your hard exit - but at least your code had some chance to wipe encryption
keys and flush files. The current standard's approach is throwing the baby
out with the bathwater.

> There are surely applications for which this isn't true, but I think
> they are a minority. I don't know if that minority is important
> enough to justify the work necessary to define and implement
> something smarter.

Well, most of that work can be obviated by plucking ideas off an existing
implementation a la Boost library. I'm sure C++ gurus would risk blood
clots when examining my sloppy C++, but that code is LGPL and I freely
offer explanatory support!

My issue with how it is right now is that if you have a secure or
mission-critical app then the standard is helping attackers or
catastrophic failure respectively right now (many of course would say such
applications should not be written in C++, but that didn't work for C so
why C++?). This is surely not something we want to encourage?

>>> I realize that those of you who never leave the bounds of Purified
>>> Natural Filtered Sanctified OO find the STL's favortism towards
>>> value types bizzare, confusing, and inappropriate, but those of
>>> us who don't have entity types falling out of ears generally
>>> appreciate it.
>>
>> Heh, I am one of the few programmers I know of who dislikes OO and
>> thinks it a bad thing.
>
> Obviously my biases are leading me to make false assumptions
> again. :-)

Oh don't worry, this happens to me all the time. Partially due to how I
deliberately frame posts in a "stir things up" style which most people
take as arrogance & naivety (which is intended).

>>> Again, it would be niced to see it improved in various ways, and nice
>>> to see it complemented by something that better supported entity
>>> types, but it shouldn't be removed or replaced.
>>
>> I'm afraid I don't know what an entity type is.
>
> An entity has identity. Even if you and I both have the same make,
> year, model, and options on a car, my car is not your
> car. Examples include bikes, shoes, ostreams, etc.
>
> An entity type is a type you instantiate to get an entity.
>
> An example of a non-entity type is an int. An int is a value type;
> when dealing with ints, any two ints which have the value 5 are
> the same.

<wince> This relabelling of old concepts with new fancy names seems to get
worse every year. It's probably a good thing for teaching first year
undergraduates as we used to interchange terms inconsistently before but
it does cause some of us who don't keep up surprise. I still don't really
know what a "visitor pattern" or "singleton" etc. precisely are, just that
I wouldn't call that code those names.

Cheers,
Niall

Niall Douglas

May 27, 2004, 7:28:54 PM

On Thu, 27 May 2004 17:17:36 +0000 (UTC), tom_usenet
<tom_u...@hotmail.com> wrote:

>> I agree with Niall. Lack of move semantics is a /major/ drawback in
>> today's C++.
>
> I agree that the feature is needed. But we need to implement it, try
> it out and only once the best solution has been found, standardise it.
> I for one would prefer a proposal that included destructive move
> semantics, which will in general will provide the highest efficiency
> gains. I note that your proposal is purely for non-destructive
> semantics at the moment.

What on earth would be the point of a move constructor which *didn't*
destroy the original?

Cheers,
Niall

Howard Hinnant

May 27, 2004, 9:58:50 PM

In article <opfbb0prdpb865spa...@4ax.com>,
tom_u...@hotmail.com (tom_usenet) wrote:

> I agree that the feature is needed. But we need to implement it, try
> it out and only once the best solution has been found, standardise it.

100% agreed. Working... ;-)

> I for one would prefer a proposal that included destructive move
> semantics, which will in general will provide the highest efficiency
> gains. I note that your proposal is purely for non-destructive
> semantics at the moment.

<nod> Go for it! :-) There's nothing in N1377 that prevents a
destructive move coexisting (placement destructor syntax perhaps?).
They don't need to compete and might be mutually beneficial. But reread:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1377.htm#Alternative%20move%20designs

before you start so that you don't relearn the gotchas that are in there
the hard way. Imho, while non-destructive move is widely applicable,
destructive move would have a much narrower focus (useful nonetheless).
I.e. A destructive move constructor would be very handy in moving
objects from one buffer to another when resizing the buffer. But I see
little use for destructive move assignment, or destructive move
functions in the user interface of a class. For example:

vector<T>::push_back(T&&);  // non-destructive move into vector.
                            // But no destructive move into vector
                            // desired.

BigObject t;
vector<BigObject> v;
v.push_back(move(t)); // do not want to destruct t, just move from it.



> Have you implemented rvalue references in MWCW, or at least in an
> experimental internal version?

Partially, and internal/experimental only for the moment. I would like
to take that partially up to full in the not too distant future, but
naturally I can't promise anything (schedules, priorities, etc.).

-Howard

Niall Douglas

May 27, 2004, 10:01:22 PM

On Thu, 27 May 2004 16:00:42 +0000 (UTC), David Abrahams
<da...@boost-consulting.com> wrote:

>> rather than having to wait to be bundled with more contentious
>> features. This carries dangers proportionate to the size of the
>> feature so should only be allowed for small features eg; move
>> constructors.
>
> Wow, if you think that's really small, I'd hate to see a medium-sized
> feature. I mean, it's not "templates" or anything, but move
> constructors has lots of implications, and no universal agreement.
> Note that nobody's even written a complete specification yet.

I thought you had?

Regarding implications and taking move constructors as an example, what's
the impact?

1. Slight changes to the compiler
2. Some changes to the STL
3. Slight enhancements of the source of a test application

If one were to patch in support to a major compiler - GCC is handy - one
could have worked out all the implications and problems within weeks. One
needn't patch all of the SGI STL - just say vector<> and list<> which
would be enough for testing.

After getting yourself happy, circulate on the internet among people who'd
also really like the feature. After they test it, one can be pretty sure
one has all the angles covered. A bugzilla entry can work as a central
discussion point for all parties.

To qualify as a fast-tracked small feature, the ability to swiftly patch
it in and get widespread volunteer testing is paramount (i.e. parallelising
debugging). In fact, one could view how I did the GCC visibility patch
that I originally arrived here for as a template - it's taken two months
overall and will be in GCC v3.5.

>> I'm hardly the first to call for such a thing I'm sure - I'm merely
>> saying "from the coal-face" that certain pretty obvious minor
>> additions to the feature set are urgently required.
>
> There's plenty of agreement on that, but the problem is that every
> feature takes more work than you seem to think, and we only have so
> much volunteer manpower at our disposal. People who "deliberately
> avoid having anything to do with groups like this" but expect us to
> hurry up and approve their favorite features aren't doing anything to
> contribute to the solution.

You know I'm no good with theoretical discussions, everyone just gets
confused (remember c++-sig!). I cannot contribute directly in any useful
fashion to the ISO C++ standardisation process as it currently works. I
could however go patch GCC to add move constructors though unfortunately
the GCC maintainers would not accept such a patch into the mainline as the
standard hasn't ruled on it yet.

If there were a fast-track mechanism based on normative addendums whereby
it worked via a volunteer patching in support to GCC, other volunteers
testing it and then the original volunteer writing up a report consisting
of:

1. Impact of the patch on other areas of C++ (existing source especially)
2. Cost of implementation (to GCC)
3. Benchmarks of improvements (before & after code)
4. A proposed diff of the standard adding the feature

... then I could be much more useful. And not just me, I can see a lot of
people becoming interested because now they actually can DO something -
not by talking across a period of years and spending thousands of euros
visiting expensive international conferences but actually sitting down and
coding.

For this, the ISO C++ committee would need a website where volunteers can
coordinate (basically a bugzilla server). We'd also need a FREE copy of
the full standard to generate diffs against and much better electronic
presence of proposed amendments so people can foresee contraindications.
And lastly, some assurance that work done would be taken seriously at the
six month meetings and perhaps crediting of those who have contributed.

>> The powers that be can take that or leave it - after all, there is
>> no viable alternative to C++ available in the area it covers so I
>> nor anyone else really has a choice.
>
> There are no powers that be; just a bunch of people who get together
> every 6 months to try to make progress, and spend a lot of time having
> email discussions and writing proposals in between meetings. You can
> either participate and make a difference or you can expect us to
> handle the things we have the time, energy, and expertise to work on.

If you lament the lack of volunteers, consider how someone unemployed for
two years can possibly afford to attend committee meetings?

Consider the huge obstacles placed in the way of attracting volunteers:
total lack of immediacy (long times between proposal and acceptance),
primitive use of the internet and far too much theory and not enough
coding. People also feel deliberations go on in private as there is
precious little evidence in public - I've only ever seen committee reports
in (costly) trade magazines.

The mode of production of free software is ideal for minor new features to
the standard - it would also remove large amounts of work from you guys
letting you focus on more major features or ruling on contradictions etc.
I'd also imagine the maintainers of GCC would be very happy as they'd get
most of implementations of small new features well ahead of time.

Cheers,
Niall

Niall Douglas

May 27, 2004, 10:02:03 PM

On Thu, 27 May 2004 17:17:54 +0000 (UTC), llewelly
<llewe...@xmission.dot.com> wrote:

> s_googl...@nedprod.com (Niall Douglas) writes:
> [snip]
>> Note that for every symbol which is available outside its DSO, a
>> particularly expensive form of relocation must also be performed at
>> load time which on all implementations I know of is quadratic to
>> symbol number. See Ulrich Drepper's "How to write shared libraries".
> [snip]
>
> This is actually an argument for having symbols not visible by
> default.

An *implementational* argument for having symbols not visible by default.
With a C++ hat on, one would say that it's the compiler & linker's problem
to most efficiently turn my C++ into the best binary possible. However
with an implementational hat on, one quickly realises that's far from
trivial without additional annotation in the source.

The original poster felt that the compiler & linker could do just fine
with symbols default public. I was illustrating that default public comes
with unacceptable costs and even with improved compiler technology it's
not going to change any time soon.

>> No you're wrong here. A static symbol is indeed hidden in that it
>> cannot be directly referenced outside its compiland. This is not true
>> for the contents of unnamed namespaces - they merely add extra
>> mangling to the symbol which actually makes the problem worse (longer
>> load & linking times and more code bloat).
>
> This is true for current implementations, but does it *need* to be
> true?
>
> Why might an implementation need to make a symbol inside a unnamed
> namespace visible at dynamic link-time?

Off the top of my head, if a type with a vtable were defined within the
unnamed namespace and then a pointer to it typeid()'d in some other
compiland the compiler would need the typeinfo to be available.

Therefore it probably needs to be true - it's certainly /easier/ if it's
true. Certainly it's true everywhere I've seen so far. If a linker were
more clever, it could speculatively hide symbols which weren't used but
that breaks if you use shared libraries where it must export everything
just in case.

Again, an argument for extra annotation of the source. Only the programmer
can really say if a symbol will be used externally or not.

Cheers,
Niall

Niall Douglas

May 27, 2004, 10:02:20 PM

On Thu, 27 May 2004 21:11:01 +0000 (UTC), llewelly
<llewe...@xmission.dot.com> wrote:

> Somewhat related, in some ways Boost is a way of experimenting with
> potential standard library features *before* they are
> standardized.
>
> Maybe Niall should experiment with Boost, and see how well it
> fullfills his requirements for a 'fast track mechanism'; does
> Niall need feature availablity, or does he need features chisled
> into the granite of 14882?

While I'd like to see changes to the standard library, I would hardly
consider them urgent. All the changes I originally outlined were very
substantial changes and therefore not possible in a short time. If you
remember, I was speaking of complete redesigns.

Anyway besides, if a library annoys you you can fix it yourself. Language
(mis)features you can't fix :(

See my other post to Dave Abrahams for my ideas on how to streamline ISO
C++ standardisation and involve a lot more workers.

Cheers,
Niall

David Abrahams

May 27, 2004, 10:02:46 PM

tom_u...@hotmail.com (tom_usenet) writes:

> I for one would prefer a proposal that included destructive move
> semantics, which will in general will provide the highest efficiency
> gains.

If you can figure out how to make destructive move work at all,
much less in the presence of exceptions, *and* you can prove that it
will improve efficiency, I'm all for it. Personally I have grave
doubts:

struct Foo
{
    ~Foo();
};

void f(Foo& x, bool y)
{
    if (y) { Foo z(move(x)); }
}

int main(int argc, char** argv)
{
    Foo a;
    f(a, argc > 1);
    // Do we generate a destructor here?
}

Ultimately you end up having to carry around an "exists" state bit
with every object. Not worth it, IMO.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

David Abrahams

May 27, 2004, 10:03:09 PM

llewelly <llewe...@xmission.dot.com> writes:

>> I'm afraid I don't know what an entity type is.
>
> An entity has identity. Even if you and I both have the same make,
> year, model, and options on a car, my car is not your
> car. Examples include bikes, shoes, ostreams, etc.
>
> An entity type is a type you instantiate to get an entity.
>
> An example of a non-entity type is an int. An int is a value type;
> when dealing with ints, any two ints which have the value 5 are
> the same.

Not quite, at least not in C++. No two lvalues are precisely
equivalent, because they have detectable addresses that may differ.
It's actually a major problem that crops up when trying to define
almost *any* formal semantics for C++ code. There's really no
difference between two ints and two bikes in this respect.
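Dave's point can be seen in a couple of lines: two ints that compare equal as values are nonetheless distinct, addressable objects.

```cpp
#include <cassert>

// Two ints holding 5 are equal as values, yet they are distinct objects:
// their addresses differ, so a program can tell them apart, which is
// exactly the "identity" property being ascribed to entity types.
void value_vs_identity() {
    int a = 5;
    int b = 5;
    assert(a == b);     // indistinguishable as values
    assert(&a != &b);   // distinguishable as objects
}
```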

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

Davide Bolcioni

May 27, 2004, 10:05:20 PM5/27/04
to
Niall Douglas wrote:
> On Wed, 26 May 2004 02:46:35 +0000 (UTC), Davide Bolcioni
> <6805b...@sneakemail.com> wrote:

>> in other words, "gcc -c" might
>> produce an "unfinished" object file where this choice of access is
>> deferred, to be resolved when "gcc" is invoked for the final link step
>> just before handling .o files to the linker (nobody said you have to
>> give the linker exactly the same .o files you started with). Since the
>> decisions in this step are C++ specific, it is the compiler's job,
>> not the linker's.
>
> Not exactly. How ELF does shared libraries means that the app can load a
> shared library and symbols from that can overwrite symbols currently
> defined eg; printf() calls one implementation before and then calls a
> different one thereafter.

This seems to me a violation of the One Definition Rule of C++, so I
should think that: a) the compiler should not produce code causing such
to occur; b) if the programmer causes this to occur within the Makefile,
he had better know what he's doing; c) overwriting symbols is however
useful at times.

> This requires the compiler to always have at
> least the equivalent overhead of a virtual function call. Even with WPO
> you can't elide this - to do so breaks things unless you assume no new
> libraries will be loaded.

Supporting (c) indeed seems to require double indirection; however, I
recollect investigations from KDE people about C++ dynamic linking in
Linux showing that relocations, not indirection, were the main source
of overhead and suggesting prelinking as a solution.

If a member function is already virtual, this is a non-issue unless the
implementation doubly indirects (I have not investigated this); if not,
there is a performance problem only when the function is not inline, yet
small enough that the overhead of the virtual call is significant, and
called often. I have seen that happen, although not in recent years and
not very often (mostly with constructors).

> Note that for every symbol which is available outside its DSO, a
> particularly expensive form of relocation must also be performed at load
> time which on all implementations I know of is quadratic to symbol
> number. See Ulrich Drepper's "How to write shared libraries".

Will do.
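For reference, per-symbol export control of the kind n1496 contemplates already exists as a compiler extension on ELF platforms: GCC's visibility attribute, sketched below. The attribute spelling and the `-fvisibility=hidden` driver option are GCC-specific and vary by version; the `MYLIB_API` macro is an invented name.

```cpp
#include <cassert>

// Sketch of GCC's ELF visibility extension. Building the library with
// -fvisibility=hidden makes symbols non-exported by default; the author
// then marks the public surface explicitly.
#if defined(__GNUC__)
#  define MYLIB_API __attribute__((visibility("default")))
#else
#  define MYLIB_API
#endif

// Exported: reachable from other DSOs, so calls from outside go through
// the usual indirection and the symbol takes part in load-time lookup.
MYLIB_API int mylib_version() { return 1; }

// Hidden under -fvisibility=hidden: only callable within this DSO, so
// the compiler may bind calls directly and the dynamic symbol table
// stays smaller (the relocation cost Drepper's paper measures).
int internal_helper() { return 42; }
```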

>> In other words, C/C++ programmers already take a visibility decision in
>> the code, using the language-provided means which the underlying
>> implementation is required to support.
>

> This is not true for the
> contents of unnamed namespaces - they merely add extra mangling to the
> symbol which actually makes the problem worse (longer load & linking
> times and more code bloat). Why don't unnamed namespaces hide their
> contents? Because "export" causes searching of unnamed namespaces
> outside its compiland!

I have not investigated the ramifications of "export" enough to comment
on this; more to come in the future.

>>> I also feel that whether something is visible outside its DSO/DLL is
>>> part of its API spec and thus interface contract. If you disagree
>>> with this, consider how public/protected/private relate to class
>>> design and note that similar logic applies to both.
>>
>> It seems to me that you are unwilling to abstract from the concrete
>> implementation; if what you're after is a further layer of
>> modularization, you might want to research "module" concepts such as
>> those found in Modula languages or the "package" concept of Java.

If I write that a member function is private, I mean exactly that; it
cannot be called from outside the class. The mangled symbol for its
implementation should not require double indirection to reach, once
WPO coalesces invocations from separate .o files; if the programmer
puts some implementations in a .so and some in another, double
indirection would be required and a comment in the standard should
warn of the associated performance cost. Most programmers would
not do that.

The above, however, still allows the programmer to overwrite symbols
through linker directives, or even by making the "mistake" of putting
the implementations he wants to override in a separate .so, causing
the compiler to use double indirection and all that.

If I write that said member function is protected, I am making an
entirely different statement: any member function of a derived class
can invoke it. The set of callers is open-ended and might include
plugins accessed with dlopen(); if the symbol is not visible, a
plugin could be written which is legal C++, compiles, but fails at
dlopen() time (this can happen today with some doctoring of the ELF
binary, unless I'm mistaken, but is clearly a bug).

The C++ language, by itself, does not allow you to directly restrict
the set of classes derived from a base class, although posting the
problem on comp.lang.c++.moderated might turn out a clever workaround;
if such a feature were necessary and introduced, I would certainly want
the error to occur at compile time rather than at the customer's site, and
be "your derived class is not on the list of classes allowed to derive",
not "symbol not found".

If I write public I mean public - anybody can call a public function
from anywhere, which is essentially analogous to the protected case
from the point of view of linking.

> I view DLLs/DSOs as as fundamental a mechanism of compartmentalising
> code as classes.

This is my point: they are the mechanism, the implementation, while the
C++ language is the interface. The implementation must keep the promises
made in the interface, which the ELF linker does at the price of
substantial contortions; you're suggesting that the interface, the C++
language, be modified to allow faster linking; I'm objecting that your
suggested change to the interface makes for a poor interface (actually
this is an objection to n1496, as said previously, which introduces
"piecemeal modularization" irrespective of the default chosen).

>> a syntactic construct to address it on a symbol by symbol
>> basis is inappropriate.

My key objection is to "on a symbol by symbol basis", which is a critique
of n1496; the choice of "default hidden" would exacerbate the problem.

> Also I like how right now things can be subdivided
> one way at the source level (via namespaces) but a totally different way
> at binary level - to me, this ADDS value through intuitiveness.

I both agree and disagree; I agree that some latitude in the organization
of code at binary level is highly desirable, but only as long as this
does not break the contract established by the C++ language.

> If you could show me more proposed syntax, I could say a lot more. While
> I'm initially negative, I could be very easily swayed as TBH I don't
> have enough experience of what you propose would mean.

Let's assume that the intent is to get rid of double indirection; it
seems to me that to achieve this you have to know, at WPO time, that
all references to a symbol in the program will use the definition from
the program itself; indirection will still be needed for accessing said
definition from shared objects and for accessing symbols referenced but
not defined in the program. You want to say "this definition cannot be
overridden" for protected/public member functions, and for namespace
scope symbols.

One syntax to achieve this might be inspired from Java:

#include "sample_class.h" // Declaration.

package main_program; // Not a scoping construct.

class sample_class { ... }; // Definition.
void f(const sample_class& arg) { ... } // Definition.

The implementation would label the symbols as belonging to a package
named "main_program" - this is the reason for having

package main_program;

rather than

package main_program { ... }

which might suggest that you can have multiple packages in the
same source file (on second thought, this might not be such a
bad idea). The package where main() appears would become the
executable program after linking; other packages would become
shared objects (actually shared objects candidates).

Symbols defined in a package are not available outside the
defining package unless explicitly imported

#include "sample_class.h" // Declaration.
package shared_object;
import class sample_class from main_program;

and

#include "sample_class.h" // Declaration.
package plugin;
import class sample_class from main_program;
import void f(const sample_class& arg) from main_program;

Please note that I say "not available" as I do not wish to suggest
further complications of C++ visibility rules; a symbol which is not
visible cannot be imported.

Importing a symbol states the intent to reference the existing
definition and allows the compiler to report a conflicting definition
as an error straight away. Importing a symbol which is already defined
is likewise immediately reported as an error.

Allowing multiple definitions of the same symbol in separate packages
would violate the ODR, but only if said packages are ever collected
together as a C++ program.

When the definition of a symbol is encountered but the symbol is not
imported, the compiler is allowed to assume that this is the definition
referenced throughout the package, an assumption it can exploit at WPO
time; a jump table would still be necessary, since other shared objects
might reference the definition.

When prelinking, conflicting definitions of the same symbol would be
detected and would cause a violation of the ODR; the same would happen
if a plugin attempted to redefine a defined symbol upon dlopen().

All things considered, the above seems a little too much for saving one
indirection on nonvirtual functions; maybe the module/package concept
can bring other benefits to C++, however.

Cheers,
Davide Bolcioni
--
Paranoia is a survival asset.

llewelly

May 28, 2004, 1:59:36 AM5/28/04
to
s_googl...@nedprod.com (Niall Douglas) writes:

> On Wed, 26 May 2004 18:01:52 +0000 (UTC), Dietmar Kuehl
> <dietma...@yahoo.com> wrote:
>
>> My short-term memory is extremely bad but I can't remember having met you
>> at the last few committee meetings: with your claimed efficiency you
>> should
>> be able to really speed things up. For some non-obvious reason the
>> progress
>> made without your excellent help is much slower. A possible reason may be
>> that those investing in standardization also have other stuff to do. Your
> accused irresponsibility of the standardization process is possibly due to
>> the fact that there are only few people who have the time to provide
>> quality
>> papers on the various issues and move the working groups through all
>> those
>> details swiftly.
>
> Oh don't get me wrong, I've never participated and I deliberately
> avoid having anything to do with groups like this (I've already been
> here longer than I intended). But in a way, that's precisely where the
> value in what I think comes from - I bring a fresh viewpoint direct
> from the coal-face which you may or may not already have enough of.
>
> As with any standardisation process, much of it will be driven by
> lowest common denominator,

If wg21 is really lowest common denominator, how on earth do you
explain the STL? People claim the STL is 'popular', but in my
experience, it was *not* popular prior to about 2000 or so -
and even today, it seems it is only popular amoung programmers
whose knowledge of C++ is substantially above average, and a long
way outside of 'lowest common denominator'.

> horse-trading between vested interests

I must admit I only read one or two papers from each mailing (no, I'm
not a member, but most/all of the papers have been publicly
available for a long time), but I have seen little evidence that
'horse-trading' has much influence on committee decisions.

Can somebody point to an example? If so, did it result in a worse
standard?

> But then cooperation with
> mutually hostile interests will always be glacially paced and after
> all, now that C++ has clearly taken the crown of most popular
> platform-independent programming language from C

Can you cite numbers?

> there are in many
> ways more reasons for it to go slower rather than faster.
>
> What I am suggesting though is a "fast track" mechanism for getting
> new features which absolutely everyone agrees upon into the standard
> quickly (six months) rather than having to wait to be bundled with
> more contentious features. This carries dangers proportionate to the
> size of the feature so should only be allowed for small features eg;
> move constructors.

[snip]

Move constructors affect overloading. Overloading is one of the most
complex areas of C++ (ISTR Daveed described it as 'worse than
templates').

Namespaces are a simple concept, but look at the mess that resulted
from them being introduced into the already complicated C++ name
lookup rules.

I believe there is a real danger something similar might happen with
move constructors. I am strongly in favor of move constructors,
and it would be great to get some real-world experience with them
soon, but I would prefer the committee move cautiously in
standardizing them.

Niall Douglas

May 28, 2004, 1:00:10 AM5/28/04
to
On Thu, 27 May 2004 03:22:32 +0000 (UTC), Francis Glassborow
<fra...@robinton.demon.co.uk> wrote:

> namespace local_new {
> // contains its overrider or overload of operator new and delete
> }
>
> mytype * i_ptr = new mytype;

mytype's vtable:
[0]: flags (indicates vtable contents)
[1]: ptr to destructor
[2]: ptr to appropriate delete for new used to instantiate

> In file f2.cpp
>
> extern mytype * i_ptr;
>
> namespace local_new {
> delete i_ptr;
> }

Compiler knows mytype has vtable. Generates code which examines vtable of
i_ptr. Sees there is an operator delete in it at [2]. Calls [1] then [2]
which calls correct destructor & operator delete.

> is just one example of a multitude of possible problems. Unlike classes
> where the compiler sees all the declarations in the class definition
> there is no requirement that the compiler sees all the declarations in a
> namespace in any particular translation unit.

Since this dynamic calling of operator delete already requires a vtable
(as the standard requires), I still don't see this as a problem.

> When you are annoyed by something in C++ it is usually a good idea to
> discover why it is the way it is. We (I was part of the group whose
> responsibility included this part of the core language) very carefully
> considered issues of overloading and overriding (i.e. replacing)
> operator new and operator delete at namespace scope and concluded that
> with the C++ separate compilation model it was impossible to implement
> which is why it is not allowed.

With respect, and I know of your name from many years ago when you
published an article of mine in ACCU, unless I have missed something your
explanation does not make any sense. As soon as a vtable is available, you
can call any operator delete anywhere at all including one completely
unknown to the current compiland. It just requires an appropriately laid
out vtable.

Indeed, if maximising code efficiency and minimising vtable size were
important, one could annotate the class to indicate if it does or doesn't
need to store a pointer to the appropriate delete to use when newed. Of
course, with the machinery that will be required by the export keyword,
the pre-linker stage could determine that on its own.

Cheers,
Niall

Howard Hinnant

May 28, 2004, 1:00:26 AM5/28/04
to
In article <opr8oj2h...@news.iol.ie>,
s_googl...@nedprod.com (Niall Douglas) wrote:

> What on earth would be the point of a move constructor which *didn't*
> destroy the original?

Consider inserting into a vector in the case where there is sufficient
capacity to hold the insert. Some of the elements in the constructed
region of memory will have to be moved to the uninitialized region to
make room for the new elements to be inserted.

With copy semantics one copy constructs from the existing elements in
the vector to the uninitialized region. With move semantics you want a
nondestructive move construction to the uninitialized region. If you
instead perform a destructive move for this purpose then your vector is
in a state before the new elements are inserted such that there is a
region of memory internal to the vector's size that is uninitialized.
If any of the external elements throws while you are copy constructing
them into your "hole", you will be left with a vector which does not
meet its internal invariants. I.e. vector::insert will fail to achieve
basic exception safety. One might be able to work around this by move
constructing the vector back to its original state (assuming a nothrow
move constructor), but that would be most inefficient, both in
performance (not so important) and in code size (very important). Such
an operation requires basic exception safety, not strong exception
safety.

This is but one example.
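The hole-opening step Howard describes can be sketched as follows, written with C++11's `std::move` for concreteness (N1377's proposed syntax differed, and this is not vector's actual implementation; the helper name and the precondition are mine):

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

// Open a hole of n slots at 'pos' inside a buffer whose constructed
// region ends at end_ and whose capacity extends past end_.
// Precondition (for this sketch): n <= end_ - pos.
template <class T>
void open_hole(T* pos, T* end_, std::size_t n) {
    // 1. Non-destructively move-construct the last n elements into the
    //    uninitialized region; the sources remain valid objects.
    for (T *src = end_ - n, *dst = end_; src != end_; ++src, ++dst)
        ::new (static_cast<void*>(dst)) T(std::move(*src));

    // 2. Shift the remaining elements up by n with move-assignment,
    //    working backwards so nothing is overwritten before it moves.
    for (T* src = end_ - n; src != pos; ) {
        --src;
        *(src + n) = std::move(*src);
    }

    // Every slot up to end_ + n now holds a constructed object (some
    // moved-from); if copying the inserted values throws, the container
    // still meets its invariants: the basic guarantee.
}
```

Had step 1 used a destructive move, the vacated slots would be dead objects inside the container's size, and an exception while filling the hole would leave the vector broken, exactly the failure Howard outlines.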

Destructive move semantics has uses, but must be used with the same care
that one uses an explicit call to an object's destructor. In practice
such calls though useful, are relatively rare. Much more common is
needing to move from an object with static or auto storage (as opposed
to an object on the heap). If an object has static or auto storage, it
already has a well defined time to be destructed. If you move from it,
you need to leave it in a valid constructed state so that when existing
language rules destruct it, things will go smoothly.

N1377 briefly discusses destructive move semantics and some of the
problems associated with it. One of the biggest problems I see is
destructively move constructing an object made up of members or base
classes that also want to destructively move construct. During such an
operation you get into a state such that that part of your object is
destructed while another part is still trying to move itself. It is a
state very foreign to C++ as we know it today (compared to destructing
most derived parts first such that an object is always in a self
consistent state even as it destructs).

That's not to say that destructive move constructors would not be
useful. I do see a few benefits. But destructive move semantics in the
hands of a non-expert is a recipe for a crashing program. It would be
an advanced topic that beginner and intermediate level programmers
should not touch (like explicit calls to an object's destructor).

In contrast, the non-destructive move semantics outlined in N1377
describe a system that is so safe that beginners can start using it
without even realizing that they are using it (no syntax changes from
today). And it accomplishes 96% - 99% of the performance goals of
destructive move semantics (estimated).

So the real question is: Is introduction of destructive move semantics
(in addition to N1377) worth the risk in buggy code for achieving that
last few percent in performance? There's a large part of me that says
yes! C++ has always been the language to say: Here's the rope, do with
it what you want. Please try not to blow up half the planet with the
nuclear device attached to the other end of that rope.

Another part of me says that the added benefit does not justify the
associated risk. So N1377 leaves the question of destructive move
semantics open, for somebody else to carry the torch. I will neither
support nor oppose such an effort until it is much more developed.

-Howard

Francis Glassborow

May 28, 2004, 11:38:56 AM5/28/04
to
In article <opr8oc5w...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>On Thu, 27 May 2004 03:22:32 +0000 (UTC), Francis Glassborow
><fra...@robinton.demon.co.uk> wrote:
>
>> namespace local_new {
>> // contains its overrider or overload of operator new and delete
>> }
>>
>> mytype * i_ptr = new mytype;
>
>mytype's vtable:
>[0]: flags (indicates vtable contents)
>[1]: ptr to destructor
>[2]: ptr to appropriate delete for new used to instantiate

1) You were originally writing about overriding/overloading operator new
and operator delete at namespace scope. Exactly how does a vtable help
with tracking in which namespace a variable was created:

mytype * i_ptr(0);

namespace local1 {
    i_ptr = new mytype;
}

namespace local2 {
    delete i_ptr;
}

2) a vtable is a property of a polymorphic class with one per class not
one per instance and so cannot be used to track the version of new that
is used to create an instance, only the version that the class as a
whole uses.

Note that non-polymorphic types do not have vtables.

Before pontificating on what C++ should provide please take time to
understand why it is the way it is today. Granted that there are
numerous places where it can be improved but it would be a mistake to
assume that features that seem superficially attractive were left out
just because we never thought about them. In this case we did think, and
think carefully. The cost of allowing new and delete operators to be
defined at namespace scope would be that every single data object
(including all the built-in types, which would break C compatibility)
would need to track how it was created.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

Francis Glassborow

May 28, 2004, 1:00:05 PM5/28/04
to
In article <opr8oc5w...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>With respect, and I know of your name from many years ago when you
>published an article of mine in ACCU, unless I have missed something
>your explanation does not make any sense. As soon as a vtable is
>available, you can call any operator delete anywhere at all including
>one completely unknown to the current compiland. It just requires an
>appropriately laid out vtable.

We do not seem to be talking about the same thing. The topic was
allowing user defined operator new and delete at namespace scope.
Checking back I note that the original suggestion was for these to only
apply to types declared in that namespace. However even with that
limitation it is still flawed:

1) Only a subset of all types have vtables.
2) vtables are about polymorphic functionality provided within a class
hierarchy for objects that belong to that hierarchy.
3) namespaces are re-openable so how does the compiler track whether or
not there is a namespace scope operator new that is applicable. Just as
an example, suppose that WG21 added an operator new to namespace std;
that addition would have to be visible in every header file that
defines a Standard Library type.

That is why it is fine to have operator new/delete provided at class
scope (even in a non-polymorphic class) because the type of the object
identifies how it can be dynamically created and so how it should be
dynamically destroyed. Either it was created in raw memory provided by a
class memory allocator or in memory provided by a global allocator.
Which is fully determined (apart from fancy footwork with placement new,
but in such cases the programmer has to tread very carefully at the
destruction site.)
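The class-scope form Francis contrasts this with is ordinary C++; a minimal sketch follows (the `tracked` class and its counter are invented for illustration):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Class-scope allocation functions: because operator new/delete are
// looked up through the type itself, the delete site always finds the
// matching deallocator. No per-object record of "where it was created"
// is needed, which is the property namespace-scope overloads would lose.
class tracked {
public:
    static std::size_t live;   // outstanding allocations (illustrative)

    static void* operator new(std::size_t sz) {
        void* p = std::malloc(sz);
        if (!p) throw std::bad_alloc();
        ++live;
        return p;
    }
    static void operator delete(void* p) {
        if (p) --live;
        std::free(p);
    }
};
std::size_t tracked::live = 0;
```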

Allowing definitions of operator new/delete at namespace scope breaks
this simplicity and introduces a level of complexity that cannot (IMO)
be justified by any possible benefits.

Please give some thought to the real problems in implementing this idea.
We did and concluded that we would have to special case overloading
operator new and delete by prohibiting their provision at namespace
scope. The decision was not taken lightly.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

Francis Glassborow

May 28, 2004, 1:00:02 PM5/28/04
to
In article <opr8oc6g...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>Consider the huge obstacles placed in the way of attracting volunteers:
>total lack of immediacy (long times between proposal and acceptance),
>primitive use of the internet and far too much theory and not enough
>coding. People also feel deliberations go on in private as there is
>precious little evidence in public - I've only ever seen committee
>reports in (costly) trade magazines.

Then join ACCU where we provide a bi-monthly report on what is happening
at WG21 and WG14.

Join J16 (agreed that is not as cheap) you do not have to attend
meetings unless you want to be able to vote.

However be assured that the one thing we will not be doing is
continually adding or modifying C++. That way lies chaos (Java already
has that and the only way to ensure your Java code does what you think
it will is to ship the JVM along with the app.)

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

Niall Douglas

May 29, 2004, 11:49:01 AM5/29/04
to
On Fri, 28 May 2004 02:05:20 +0000 (UTC), Davide Bolcioni
<f5xz...@sneakemail.com> wrote:

> Supporting (c) indeed seems to require double indirection; however, I
> recollect investigations from KDE people about C++ dynamic linking in
> Linux showing that relocations, not indirection, were the main source
> of overhead and suggesting prelinking as a solution.

Absolutely. If however every symbol access requires an address fetch in
order to determine where to fetch the data, you create very nasty pipeline
stalls on modern processors as that singularly defeats any prediction unit
in the CPU.

> If I write that a member function is private, I mean exactly that; it
> cannot be called from outside the class. The mangled symbol for its
> implementation should not require double indirection to reach, once
> WPO coalesces invocations from separate .o files; if the programmer
> puts some implementations in a .so and some in another, double
> indirection would be required and a comment in the standard should
> warn of the associated performance cost. Most programmers would
> not do that.

Thing is, it must remain publicly visible in case a friend class wants it.
And bear in mind that the linker rarely knows anything about C++ at all,
it just blindly assembles object files.

> The above, however, still allows the programmer to overwrite symbols
> through linker directives, or even by making the "mistake" of putting
> the implementations he wants to override in a separate .so, causing
> the compiler to use double indirection and all that.

Y'see, I would argue that the costs of being able to override symbols like
this aren't worth it given the amount of code that will ever use such a
feature. Far better such code does it explicitly so everyone else can run
much faster.

> If I write public I mean public - anybody can call a public function
> from anywhere, which is essentially analogous to the protected case
> from the point of view of linking.

What if it's public to classes within your DSO but not to anything outside?

>>> a syntactic construct to address it on a symbol by symbol
>>> basis is inappropriate.
>
> My key objection is to "on a symbol by symbol basis", which is a critique
> of n1496; the choice of "default hidden" would exacerbate the problem.

My concern is that what you seem to want would need a much more
intelligent linker. Linkers work in terms of symbols and thus why one
specifies visibility on a per-symbol basis. I'm not sure moving that way
is the right one.

>> Also I like how right now things can be subdivided one way at the
>> source level (via namespaces) but a totally different way at binary
>> level - to me, this ADDS value through intuitiveness.
>
> I both agree and disagree; I agree that some latitude in the organization
> of code at binary level is highly desiderable, but only as long as this
> does not break the contract established by the C++ language.

Bear in mind that C++ isn't the only linkable language - many languages
output ELF object files, some of which can even be directly linked with
C++ (though mostly it's just C). What you propose would seem to break
that capability.

> Allowing multiple definitions of the same symbol in separate packages
> would cause a violation of the ODR rule, but only if said packages
> are ever collected together as a C++ program.

How do you handle a program loading an unknown library (package) at run
time?

> All things considered, the above seems a little too much for saving one
> indirection from nonvirtual functions; maybe the module/package concept
> can bring other benefits from C++, however.

Of course, one could simply mandate the Win32 method of doing shared
libraries and all these problems on ELF go away. ELF itself as a format
can handle it easily - it's just the semantics of working with shared
libraries need to change (in a non-breaking fashion with how it's
currently done).

Cheers,
Niall

Edward Diener

May 29, 2004, 11:50:05 AM5/29/04
to
tom_usenet wrote:
> On Mon, 24 May 2004 14:29:39 CST, Niall Douglas
> <s_googl...@nedprod.com> wrote:
>
>> I personally feel that when producing language standards as with any
>> standard, the format should be big release, then point 1 release and
>> point 2 release with point 2 released as the next big release is in
>> preparation. ISO C++ has not followed this, much to its detriment :(
>
> You appear to have missed the 2003 TC (or "point release"). There's
> also the library TR coming out fairly soon. So you have a "bug fix"
> point release and a new features point release. What more do you want?

A release that adds new language features for which many have been asking.
The 2003 TC appears as a much-needed clarification to a number of C++
language issues, but it does little as far as I can see to solve certain
language issues. The library TR is certainly appreciated, since I and many
other C++ programmers already use a number of the libraries, especially the
ones from Boost.

Before you ask specifically what language features, I think that there are a
number of ones for which there has been a general consensus that a solution
should be found, even if no consensus solution officially exists yet or
there are a number of highly regarded solutions being considered. An example
would be the forwarding problem for which a number of the leading Boost
developers have spoken eloquently. But that is one example and there are a
number more ( templated typedefs is another ).

I think that what is disappointing is the slowness by which the C++
committee moves to make helpful language changes. While I understand much of
the conservative nature of that slowness, I know of no other major language
which has a ten year span to make language changes. I know that there will
be many arguments why this time span is needed, including the standard two
about the work of the committee being freely given with no pay and the
invitation to be part of the committee if I want to see changes made. I am
not in any way denigrating the work or the integrity of the committee
members. But I see the slowness to add language features to C++ as a
detriment toward both attracting programmers to the language and as a means
of improving the language for the sake of the C++ programmers already using
it who are attempting to use language features to solve difficult problems.

Ten years in computer time is very long, and by the time C++ may be ready to
ease a programming problem through a needed language change, other languages
have long since solved the problem ( I am thinking of Python especially,
although C# and Java are also faster than C++ about making changes, not
necessarily of course always to the benefit of those languages ). I would
like C++ to solve problems a little quicker, but then again I don't see
occasionally taking the wrong road and correcting it later as such a
disaster as C++ seems to consider it. As an example, C++ exception
specifications are generally considered pretty useless as specified in the
language, but that is not a disaster, and I can see little reason they
cannot be dropped, improved, or respecified in the future. Will it break
some code ? Most probably. But the C++ programming world will not come to
an end if this happens; thousands of C++ programmers will not throw up
their hands in disgust, foment against Mr. Stroustrup and the C++
committee, and leave for other language pastures. They will simply change
their code for a hopefully better future. The idea that all changes must be
absolutely and perfectly correct the first time around has led to a stasis
in the language that I think is unnecessary.

The other issue I see with C++ language changes, with all due respect to Mr.
Stroustrup and his opinions, is the need to maintain backward compatibility
with C forever. Cannot this Gordian knot simply be cut, and C++ declared its
own language, with backward compatibility with the C programming language no
longer important or wanted ? I think so, and if the time to do this has not
already come, it will come very soon if C++ is to move forward into richer
and easier-to-use programming areas.

Niall Douglas

May 29, 2004, 4:53:53 PM
On Sat, 29 May 2004 15:50:05 +0000 (UTC), Edward Diener
<eldi...@earthlink.net> wrote:

>> You appear to have missed the 2003 TC (or "point release"). There's
>> also the library TR coming out fairly soon. So you have a "bug fix"
>> point release and a new features point release. What more do you want?
>

> I think that what is disappointing is the slowness by which the C++
> committee moves to make helpful language changes. While I understand
> much of the conservative nature of that slowness, I know of no other
> major language which has a ten year span to make language changes. I
> know that there will be many arguments why this time span is needed,
> including the standard two about the work of the committee being freely
> given with no pay and the invitation to be part of the committee if I
> want to see changes made. I am not in any way denigrating the work or
> the integrity of the committee members. But I see the slowness to add
> language features to C++ as a detriment toward both attracting
> programmers to the language and as a means of improving the language
> for the sake of the C++ programmers already using it who are attempting
> to use language features to solve difficult problems.

I absolutely concur with this message.

Thing is, it's not like there's a lack of people willing to volunteer
time. I among many would be happy to contribute to fixing shortfalls in
the language. However, the way the C++ standardisation process is
currently laid out makes this very hard for us - why can't it use the
methods free software projects have shown so effective, and parallelise
the "debugging" part of new features? N1377, for one, could be solved
within weeks and could technically become part of the standard within a
year (everyone so far has said "it needs more practical experience").

Cheers,
Niall

David Abrahams

May 30, 2004, 3:39:59 PM
s_googl...@nedprod.com (Niall Douglas) writes:

> N1377 for one could be solved within weeks and could technically
> become part of the standard within a year (everyone so far has said
> "it needs more practical experience").

As I've said, it also needs a formal specification. And if you think
the content of N1377 is an adequate formal spec, that just
reinforces my claim that your view of what constitutes
standard-readiness is terribly naive. I don't know who you think has
the time/ability to solve the problems of specifying the interactions
with overload resolution and filling in all the other holes "within
weeks", but I assure you that I can't do it that quickly, and I don't
know of anyone else who's prepared to.

Furthermore, there are real practical problems with a standard that
changes on a yearly basis, too many for me to list here, but I'm sure
you can google for the many discussions that have already happened.
One reason you probably won't find there is that, to avoid making a
worse mess of the language design, it's important to consider the
major new features together. It has happened more than once that
proposed features I thought would be great began to look bad or even
disastrous in the context of newer ideas/proposals.

As it happens, someone *is* working on a revised move/forwarding paper
that should get us closer to something that's ready to be accepted
into the WP. I also happen to think that these features are fairly
orthogonal to the other things we want to do, but that was far from
obvious until recent progress was made on varargs templates, and it's
still not equally obvious to everyone.

I'm only going to say this one more time: complaining that things
could be done faster doesn't do anything to actually get them done
faster. I have already mentioned some of the things you *could* do to
help make progress (hint: making suggestions from the sidelines about
what other people should do isn't one of them). This may be a waste
of time, but I encourage you to start mining the rich vein that PJP
referred to in a recent post, and apply yourself in some of these
ways... at least if you care about making a difference.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

llewelly

May 30, 2004, 9:50:58 PM
eldi...@earthlink.net ("Edward Diener") writes:

> tom_usenet wrote:
>> On Mon, 24 May 2004 14:29:39 CST, Niall Douglas
>> <s_googl...@nedprod.com> wrote:
>>
>>> I personally feel that when producing language standards as with any
>>> standard, the format should be big release, then point 1 release and
>>> point 2 release with point 2 released as the next big release is in
>>> preparation. ISO C++ has not followed this, much to its detriment :(
>>
>> You appear to have missed the 2003 TC (or "point release"). There's
>> also the library TR coming out fairly soon. So you have a "bug fix"
>> point release and a new features point release. What more do you want?
>
> A release that adds new language features for which many have been asking.
> The 2003 TC appears as a much-needed clarification to a number of C++
> language issues, but it does little as far as I can see to solve certain
> language issues. The library TR is certainly appreciated, since I and many
> other C++ programmers already use a number of the libraries, especially the
> ones from Boost.

C++, unfortunately, is too complicated for simple features to be
added to the core language without potentially complex
fallout. Namespaces, a very simple concept, had much complex
fallout - why? Because C++ name lookup was already
complicated. As far as C++ is concerned, there are no small
language changes.

[snip]

> I think that what is disappointing is the slowness by which the C++
> committee moves to make helpful language changes. While I understand much of
> the conservative nature of that slowness, I know of no other major language
> which has a ten year span to make language changes.

C comes to mind, as does Ada. I think Fortran has a similar
schedule. I think the overall picture is that young languages
change rapidly, and mature languages change slowly.

[snip]

> Ten years in computer time is very long, and by the time C++ may be ready to
> ease a programming problem through a needed language change, other languages
> have already solved the problem long ago ( I am thinking of Python
> especially although C# and Java are also faster about making changes
> than

Python, Java (and also Perl):

(0) are not standardized the way C++ is.
(1) have 95% of their users using implementations from a single source.

(1) is also true for C#, AFAIK.

FWIW, when C++ was young, it changed at least as fast as any of these
languages - and it continued to change rapidly even when it was
much less young; look at everything the standards committee added
beyond the ARM; all of that was added after C++ was over 10 years old.

> C++, not necessarily of course every time to the benefit of those
> languages ). I would like C++ to solve problems a little quicker, but then
> again I don't see occasionally taking the wrong road and then correcting it
> later on as such a disaster as C++ seems to consider it. As an example of
> this, C++ exception specifications are generally considered pretty useless
> as they have been specified in the language, but it is not a disaster and I
> can see little reason that they can not be either dropped, improved, or
> respecified in the future. Will it break some code ? Most probably. But the
> C++ programming world will not come to an end if this happens and thousands
> of C++ programmers will not throw up their hands in disgust, foment against
> Mr. Stroustrup and the C++ commitee, and leave for other language
> pastures.

I have seen many people 'throw up their hands in disgust' and leave
C++ precisely because they felt it changed too much and / or too
quickly. In my experience, this is one of the primary reasons
people cite for leaving C++. Now, sometimes the gain is worth the
loss, but do not pretend no one will leave.

> They will simply change their code for a hopefully better future. The idea
> that all changes must be absolutely and perfectly correct the first time
> around has led to a stasis in the language that I think is unnecessary.
>
> The other issue I see with C++ language changes, with all due respect to Mr.
> Stroustrup and his opinions, is the need to maintain backward compatibility
> with C forever and forever. Can not this Gordian knot simply be cut and C++
> declared its own language with backward compatibility with the C programming
> language not important or wanted anymore ?

Not yet. Here's why:

(0) Almost all large C++ projects use some C libraries.

(1) People still move from C to C++ by taking a large application
written in C, and making the changes necessary for it to build
with a C++ implementation.

> I think so and, if the time has
> not already come to do this, it will come very soon if C++ is to move
> forward into richer and easier to use programming areas.

I bet C programmers still outnumber programmers in 'richer and
easier to use programming areas'.

David Abrahams

May 30, 2004, 9:52:53 PM
s_googl...@nedprod.com (Niall Douglas) writes:

> I could however go patch GCC to add move constructors though
> unfortunately the GCC maintainers would not accept such a patch into
> the mainline as the standard hasn't ruled on it yet.

I'm not sure that's a foregone conclusion. GCC already contains
numerous extensions beyond the official C/C++ language standards. I
think the libstdc++-v3 developers might welcome the opportunity to use
move/forwarding, too.

> If there were a fast-track mechanism based on normative addendums
> whereby it worked via a volunteer patching in support to GCC, other
> volunteers testing it and then the original volunteer writing up a
> report consisting of:
>
> 1. Impact of the patch on other areas of C++ (existing source especially)
> 2. Cost of implementation (to GCC)
> 3. Benchmarks of improvements (before & after code)
> 4. A proposed diff of the standard adding the feature
>
> .. then I could be much more useful.

You don't need a "fast track mechanism" in order to pursue that
course. Demonstrating implementability and usefulness, and producing
an implementation for reference, would in itself have a huge impact on
getting any feature approved.

> And not just me, I can see a lot of people becoming interested
> because now they actually can DO something - not by talking across a
> period of years and spending thousands of euros visiting expensive
> international conferences but actually sitting down and coding.

You make it sound like coding is the "real work". Well, it's
important, but someone has to design the features (a job you've
clearly been underestimating) and generate some agreement for them.

> For this, the ISO C++ committee would need a website where volunteers
> can coordinate (basically a bugzilla server).

I don't see why the committee needs to manage this. Why don't you
just set up a sourceforge project? Or, do it in the context of GCC
development?

> We'd also need a FREE copy of the full standard to generate diffs
> against

I thought you wanted to work on code, not specifications (?)

Anyway, sorry, nobody has control over the cost of the standard ($18
in PDF form, or you can buy hardcopy from Wiley). ISO rules control
that.

> and much better electronic presence of proposed amendments
> so people can foresee contraindications.

What does "better presence" mean?!?
They're all posted at
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/

> And lastly, some assurance that work done would be taken seriously
> at the six month meetings

??

What kind of assurance do you _normally_ get that people will take
your results seriously before you start a project? You didn't ask
for assurance before posting here...

If your work is solid and relevant, it'll be taken seriously.
Otherwise, I guess it won't.

> and perhaps crediting of those who have contributed.

Sure, people are generally credited in the papers that they write.
Any proposal will surely reference any implementations and credit
those who did the work.

>> There are no powers that be; just a bunch of people who get together
>> every 6 months to try to make progress, and spend a lot of time having
>> email discussions and writing proposals in between meetings. You can
>> either participate and make a difference or you can expect us to
>> handle the things we have the time, energy, and expertise to work on.
>
> If you lament the lack of volunteers, consider how someone
> unemployed for two years can possibly afford to attend committee
> meetings?

I'm not the one doing the lamenting here. If you lament the "lack of
progress", consider how those of us actually trying to make progress
feel! There are other ways to make a difference than showing up at
meetings. You may notice that Peter Dimov is a co-author on the
move/forwarding papers. He lives in Bulgaria and has never been able
to come to a meeting.

I don't mean to sound callous, but the last person to make similar
chronic complaints -- that he couldn't make a positive contribution
because he couldn't afford to attend meetings because he lived in
Australia and he couldn't afford to leave the country -- didn't even
show up when we finally had a meeting in Sydney.

> Consider the huge obstacles placed in the way of attracting
> volunteers: total lack of immediacy (long times between proposal and
> acceptance)

It's not total. You can get feedback long before acceptance, which is
appropriate. No significant proposal is right when it arrives;
iteration is needed. But, yes, the feedback could be more immediate;
we're actually working on that problem.

> primitive use of the internet

That is, in fact, a hot topic of discussion in the committee. We
could *really* use someone with the time and expertise to improve our
networking technology. We need searchable archives for our mailing
lists, just for example. Would you like to volunteer to set up better
web resources for us?

> and far too much theory and not enough coding.

Not sure why, but I find that statement somewhat insulting. That
said, I think everyone on the committee agrees that it'd be a great
thing if we could more quickly get experience with actual
implementations of proposed features. The problem is that few people
actually have the time and expertise to implement them in an existing
compiler. So, if you want to make a contribution, and you have the
ability, implementing proposals in GCC would be a big help.

> People also feel deliberations go on in private as there is precious
> little evidence in public - I've only ever seen committee reports in
> (costly) trade magazines.

See Francis' answer. But yes, deliberations do go on "in private".
ISO rules IIRC.

> The mode of production of free software is ideal for minor new
> features to the standard - it would also remove large amounts of work
> from you guys letting you focus on more major features or ruling on
> contradictions etc.

Well, no, it wouldn't. It would however remove lots of uncertainty
and make it much easier to accept proposals quickly.

> I'd also imagine the maintainers of GCC would be very happy as
> they'd get most of implementations of small new features well ahead
> of time.

Sounds good to me. Why don't you get started?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

Sergiy Kanilo

Jun 1, 2004, 2:04:49 PM

"Francis Glassborow" <fra...@robinton.demon.co.uk> wrote in message
news:31WCzWJA...@robinton.demon.co.uk...

> In article <opr8oc6g...@news.iol.ie>, Niall Douglas
> <s_googl...@nedprod.com> writes
> >Consider the huge obstacles placed in the way of attracting volunteers:
> >total lack of immediacy (long times between proposal and acceptance),
> >primitive use of the internet and far too much theory and not enough
> >coding. People also feel deliberations go on in private as there is
> >precious little evidence in public - I've only ever seen committee
> >reports in (costly) trade magazines.
>
> Then join ACCU where we provide a bi-monthly report on what is happening
> at WG21 and WG14.

Could you give more details about that? The ACCU site states that

"Those interested in the C and C++ standardisation processes may join the
ISO Standards Development Forum. The ACCU provides these members with
access to current standardisation material and a bi-yearly standards
briefing letter ..."

Unfortunately, I cannot find how to join the ISDF, and the only other
information about the ISDF is http://www.accu.org/isdf/public/ , dated
January 1998.

Thanks,
Serge

Niall Douglas

Jun 1, 2004, 1:05:39 PM
On Mon, 31 May 2004 01:52:53 +0000 (UTC), David Abrahams
<da...@boost-consulting.com> wrote:

>> I could however go patch GCC to add move constructors though
>> unfortunately the GCC maintainers would not accept such a patch into
>> the mainline as the standard hasn't ruled on it yet.
>
> I'm not sure that's a foregone conclusion. GCC already contains
> numerous extensions beyond the official C/C++ language standards. I
> think the libstdc++-v3 developers might welcome the opportunity to use
> move/forwarding, too.

I base that on what they said to me in response to suggestions I made. I
understand it - I'd do the same if I were them.

> You don't need a "fast track mechanism" in order to pursue that
> course. Demonstrating implementability and usefulness, and producing
> an implementation for reference, would in itself have a huge impact on
> getting any feature approved.

Of course I could just go do this - but would it be worthwhile for me to
do so? If I were going it alone, there's zero reason to expect anyone
else would participate and every reason to expect it would be wasted
work, especially given the GCC maintainers aren't keen on it either.
That's why such actions need an official mandate; e.g. if a certain
country had sought UN approval before pressing ahead regardless, it
would have removed many of the extra hurdles it now faces.

And besides, I was suggesting that the standardisation process overall
would be made more efficient by setting up a generic scheme like I
outlined to fast track the uncontentious additions. It needs to be
approved by the powers-that-be to work properly because that brings with
it instant legitimacy as an official arm of the standardisation process.
And legitimacy is everything here.

>> And not just me, I can see a lot of people becoming interested
>> because now they actually can DO something - not by talking across a
>> period of years and spending thousands of euros visiting expensive
>> international conferences but actually sitting down and coding.
>
> You make it sound like coding is the "real work". Well, it's
> important, but someone has to design the features (a job you've
> clearly been underestimating) and generate some agreement for them.

Ach, you totally misunderstand (but you're hardly alone). Go back and look
at my previous posts - I never once said that this process should be for
major features. It could only ever work for uncontentious features
everyone agrees are needed, and even then only those whose approximate
form is already known, i.e. features which need "implementational
experience".

The higher level stuff should remain a committee-led process. I'm talking
purely about small, "obvious to everyone" features - hence my use of
"fast-trackable" to denote them. I qualified what I meant by "fast track"
a number of times, but various posters repeatedly either twisted my words
or ignored my explanations.

>> We'd also need a FREE copy of the full standard to generate diffs
>> against
>
> I thought you wanted to work on code, not specifications (?)

The people who do the code are surely the best placed to write the first
draft of changes, given that they know best what's required?

> What kind of assurance do you _normally_ get that people will take
> your results seriously before you start a project? You didn't ask
> for assurance before posting here...

There is a huge difference between starting/forking a project and
contributing to an existing one.

> I don't mean to sound callous, but the last person to make similar
> chronic complaints -- that he couldn't make a positive contribution
> because he couldn't afford to attend meetings because he lived in
> Australia and he couldn't afford to leave the country -- didn't even
> show up when we finally had a meeting in Sydney.

Actually, after your last post before this one I was going to vanish
again, having concluded further posts would be pointless. You all may not
realise this, but I really respect all you guys and I'm sorry I seem to
have annoyed you all (but then that's precisely why I avoid groups like
this, as I stated at the outset). OTOH, I have tried to suggest ways you
all could improve things from my point of view and have been singularly
told "either work through 'the system' or shut up". Well I'm sorry for
saying this, but that's not very open-minded - and while I may be wrong,
or be the 200th person to suggest the same, this angry "to question my
actions is to insult me" attitude reminds me of the unproductive academic
cliques typical of most universities (please note, Dave, I am not saying
this of you specifically - I have the very highest regard for you - but
of the majority of responses I have received).

Anyway, enough with the criticisms - I'm going to stop now. I wish you all
the best of luck and if you ever do enact a mechanism by which I could
usefully contribute, you'll see me again.

Cheers,
Niall

Gabriel Dos Reis

Jun 2, 2004, 12:56:21 AM
s_googl...@nedprod.com (Niall Douglas) writes:

[...]

| Of course I could just go do this - however would it be worthwhile in
| me doing so? If I were going it alone then there's zero reason to
| expect anyone else would participate

If you build it, they will come.

--
Gabriel Dos Reis
g...@cs.tamu.edu
Texas A&M University -- Computer Science Department
301, Bright Building -- College Station, TX 77843-3112

Francis Glassborow

Jun 2, 2004, 1:32:12 PM
In article <yo0uc.3668$Yd3...@newsread3.news.atl.earthlink.net>, Edward
Diener <eldi...@earthlink.net> writes

>I think that what is disappointing is the slowness by which the C++
>committee moves to make helpful language changes. While I understand much of
>the conservative nature of that slowness, I know of no other major language
>which has a ten year span to make language changes.

Fortran and Cobol (both still actively used languages), C, etc. It is
very unwise to tinker with a language before the last round of changes
has been widely implemented. The reason some languages need such short
cycles of change is that they have to keep repairing the things they
broke the last time. Getting something wrong once is excusable; doing so
more than once isn't.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

David Eng

Jun 2, 2004, 2:44:36 PM
da...@boost-consulting.com (David Abrahams) wrote:

>There are no powers that be; just a bunch of people who get together
>every 6 months to try to make progress, and spend a lot of time
>having email discussions and writing proposals in between meetings.
>You can either participate and make a difference or you can expect us
>to handle the things we have the time, energy, and expertise to work
>on.

I feel insulted by responses like "either work through the system or
just shut up" when people have suggestions or questions about C++.
It's like going to a store and telling a salesperson which features you
wish the TV had, only to have the salesperson tell you to either invent
those features yourself or shut up.

The C++ standard is in the public domain, and the users of C++ want C++
to be more useful and powerful. If you know the C++ standards committee
cannot keep up with the demands of its users, why don't you and the
committee change the process instead of continuing as you described
above forever? There is plenty of manpower if you consider the
universities and research institutions. The C++ standards committee
could set up some full-time staff. I don't see any problem with raising
several million dollars per year (consider how much money governments
could save by having a good C++ standard). Let the top universities and
research institutions take part in the design, and let PhD students
implement it (just remember that Unix, C and C++, networking, and X
Windows were invented at these universities and research institutions).
This process is cheap, yet you get the best results. It is what NASA
does: NASA has very limited research capability, so most projects are
done through universities and research institutions. NASA pays tuition
for PhD students so that professors have funding to do research and
write papers and students are able to get their degrees. I believe the
C++ standards committee could do the same thing, instead of only having
a bunch of people who get together every 6 months.

I agree that one should be conservative about changing the language, but
more aggressive about adding or standardizing libraries. For example,
there are dozens of thread library and hash table implementations. Why
does it take the C++ committee so long to standardize these? I also
cannot believe the committee doesn't have a plan to standardize a thread
library in the next C++ extension. If this is because the committee
lacks time and volunteers, please see my suggestion above. The standards
process should be far more open and public, and should allow more people
to get involved, instead of the current format in which only a dozen
people sit on the committee asking for volunteer work. That will not
work, considering the complexity of C++ and its more than a million
programmers. So, instead of shutting down those criticisms, you and the
C++ committee would do better to consider how to make the C++ standards
process efficient enough to meet the requirements of its users.

Loïc Joly

Jun 2, 2004, 3:10:08 PM
Niall Douglas wrote:

[snip]


> If one were to patch in support to a major compiler - GCC is handy -

[snip]

It seems from various discussions that gcc is often considered the
platform of choice for experimenting with the C++ core language. I do
not think it is.

Even if its GNU status makes it readily available, it is a production
compiler, and as such it has to deal with many things unnecessary for
experimentation: several programming languages, a very wide range of
supported OSes, source code in C (I don't know why), a CVS process
designed for stability, an optimized compilation process...

So here comes my question: does anyone have another contender? What I
look for is:
- A compiler that is not micro-optimised
- A compiler written in C++, and in "good" C++
- A compiler with source code available
- A compiler that does not care too much about exotic platforms (even if
at least two supported platforms seem a minimum)


Does this compiler exist ?

--
Loïc

David Abrahams

Jun 2, 2004, 3:12:07 PM

===================================== MODERATOR'S COMMENT:

This discussion is getting a bit heated. This article is OK, but let's
all please try to deescalate and to keep the discussion from becoming
personal.


===================================== END OF MODERATOR'S COMMENT
davide...@yahoo.com (David Eng) writes:

> da...@boost-consulting.com (David Abrahams) wrote:
>
>>There are no powers that be; just a bunch of people who get together
>>every 6 months to try to make progress, and spend a lot of time
>>having email discussions and writing proposals in between meetings.
>>You can either participate and make a difference or you can expect us
>>to handle the things we have the time, energy, and expertise to work
>>on.
>
> I feel insulted with your response like "either work through the
> system or just shut up" when people have some suggestions or
> questions about C++.

Excuse me, but I think you owe me an apology for implying that I would
say that, or anything like it. I never said _anything_ even a bit
like that.

I also have to wonder where this accusation originates. It sounds
suspiciously like something Niall recently posted:

OTOH, I have tried to suggest ways you all could improve things from
my point of view and have been singularly told
"either work through 'the system' or shut up". Well I'm sorry for

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


saying this, but that's not very open-minded - and while I may be
wrong, or be the 200th person to suggest the same, this angry "to
question my actions is to insult me" attitude reminds me of the
unproductive academic clique typical in most universities (please
note Dave I am not saying this of you specifically - I have the very

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


highest of regard for you - but generally of the majority of
responses I have received).

I disagree with Niall almost completely: I don't think he got a
response like that from anyone here, but at least he had the decency
to make it clear that he wasn't claiming *I* had said those things.
Are you sure you didn't read his post and mistake his words for mine?

> It's like when you go to a store to tell a sales person that you
> want the TV have some features you wish, but the sales person tells
> you that you either invent those features or just shut up.

Excuse me, but are you going to hand me money to get these features
into the standard? If not, there is no similarity -- even if I had
told someone to "invent feature or shut up" (which I did not)!

We at Boost Consulting, for example, would be very glad to take
contracts for development of certain C++ features and promoting their
standardization. In the meantime, we are volunteers in the C++
standardization effort and reserve the right to decline to work on
anything we don't have time for. We will, however, helpfully guide
those who say they want to make a difference in C++ standardization.

> C++ standard is a public domain and the users of C++ want C++ to be
> more useful and powerful. If you know the C++ standard committee
> cannot keep up with the demands from the users, why don't you and
> committee make a change instead of doing this (the current standard
> process you stated above) for ever?

Because

a) we don't have the resources to make the requested change by
ourselves.

b) the requested change might serve the demands of a few people
who have posted here, but (in my judgement and in that of a few
others at least) the community as a whole would be ill-served.

But mostly because of a). I think that _some_ of the things people
have asked for would serve the community well, if they could be set
up.

> There is plenty of manpower if you consider the Universities and
> research institutions. C++ standard committee can set up some full
> time staff.

No, we can't.

> I don't think there are any problems to raise several millions
> dollar per year (consider having a good C++ standard, how much money
> government can save).

You must be joking. Do you have the time and expertise to raise
several million dollars per year? If not, why do you assume that I
do? Am I being trolled here?

> Let the top Universities and research institutions involve the
> design and let the PhD students to implement it (just remember that
> the Unix, C and C++, network, X windows were invented by these
> Universities and research institutions).

Long before standardization. You know, what you're describing is very
much like the open-source
development process, where the mantra from maintainers to those who
want major changes that the maintainer has no time to implement is,
"patches welcomed". It's hard to understand why anyone would find the
same response insulting in the context of C++ standardization.

> This process is cheap yet you get the best results. This is what
> NASA does. NASA has very limited research capability.

And comparatively enormous funding.

> Most projects were done through Universities and research
> institutions. NASA pays tuitions for PhD students so that
> Professors have funding to do research and write papers and students
> are able to get degree. I believe that C++ standard committee is
> able to do the same thing, instead of only having a bunch of people
> who get together every 6 months.

We already have a significant representation in research institutions
and universities. Among them, Bjarne Stroustrup and Gabriel Dos Reis
(Texas A&M), Jaakko Jarvi, Jeremy Siek, and Andrew Lumsdaine (Indiana
University), and Douglas Gregor (RPI). There is no major barrier to
entry for University researchers; on the contrary, it is easier in
many ways for them to participate than people outside academia.

That said, you might be surprised at how little interest there is in
C++ among computer science researchers in general, mostly because of
"aesthetic concerns".

> I agree that it shall be conservative to change the language, but it
> shall be more aggressive to add or standardize libraries.

Interesting choice of language. I don't know if you're a native
English speaker or not, but you should know that it comes off
sounding like a "royal decree" when you use "shall" that way. Maybe
you mean "should"?

> For example, there are dozen of thread library and hash table
> implementations. But why take so long for C++ committee to
> standardize these implementations?

In part because it takes that long to get the standard right. But
there are other reasons.

> I also cannot believe the standard committee doesn't have a plan to
> standardize a thread library in next C++ extension.

Believe it.

> If this is because committee doesn't have time and volunteers,
> please see my suggestion above.

That's a very simplistic view of what I think the reasons are, but
it's basically on target. We also don't have much time or many
volunteers to go out and seek volunteers, set up new bureaucratic
processes, implement new web applications, maintain servers, etc, etc.

> The standard process shall be far more open and public
                       ^^^^^

> and allows more people involve but not in the current format in
> which only dozen of people sit on the committee asking for volunteer
> work.

Nobody's _asking_ for volunteer work. I'm saying, "if you want
something to happen, volunteer to do something about it. If not,
other people will do what they have time for". I don't care whether
you volunteer or not -- OK, I care a little: more contributors doing
useful work would help.

> That will not work considering complex of C++ and over million of
> C++ programmers. So, instead of shut up those criticisms, you and
> C++ committee are better to consider how to make C++ standard
> process more efficient to meet the requirement from the users.

As I mentioned in other responses to Niall, we are actually working on
ways to make it more efficient. In fact *I* in particular have
expended a great deal of time and energy in that quest. If you had
any idea how difficult it can be to get a committee to even agree on
informal processes for making progress between meetings, you'd be a
lot more sympathetic to our efforts.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

James Kuyper

Jun 2, 2004, 5:29:24 PM
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr8vnkj...@news.iol.ie>...

> On Mon, 31 May 2004 01:52:53 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
..

> > You don't need a "fast track mechanism" in order to pursue that
> > course. Demonstrating implementability and usefulness, and producing
> > an implementation for reference, would in itself have a huge impact on
> > getting any feature approved.
>
> Of course I could just go do this - however would it be worthwhile in me
> doing so? If I were going it alone then there's zero reason to expect
> anyone else would participate and every reason to expect it would be
> wasted work, especially given the GCC maintainers aren't keen on it

If you're so sure it's a good idea, why are you assuming it will be
impossible to convince them that it's a good idea? You only need to
convince one person (yourself) to create a reference implementation of
a new idea. Once it exists, it can be used to convince others that the
idea is actually worthwhile.

> And besides, I was suggesting that the standardisation process overall
> would be made more efficient by setting up a generic scheme like I
> outlined to fast track the uncontentious additions. It needs to be

As someone has pointed out, uncontentious additions are too few to
justify creating a special process for fast-tracking them. A
fast-track approach won't really save much time for such additions,
because if they truly are uncontentious, even the slow-track process
currently in use will process them reasonably quickly. It's the
contentious changes that take the most time to process.

Niall Douglas

Jun 3, 2004, 1:25:47 PM
On Wed, 2 Jun 2004 19:12:07 +0000 (UTC), David Abrahams
<da...@boost-consulting.com> wrote:

> a) we don't have the resources to make the requested change by
> ourselves.
>
> b) the requested change might serve the demands of a few people
> who have posted here, but (in my judgement and in that of a few
> others at least) the community as a whole would be ill-served.
>
> But mostly because of a). I think that _some_ of the things people
> have asked for would serve the community well, if they could be set
> up.

I really don't know if it's worth me trying to explain what I am saying
again. I did point some friends of mine at this group to see if I was
being unreasonable and some said that I wasn't taking enough account of
cultural differences - that when you expert guys write what you do you
think you're saying something quite different to what it appears to me
when I read it and vice versa. Hence both sides are getting inflamed by
the other's comments - I certainly have read the replies to my views and
found them most unproductive - it seemed very much to me like I was being
shouted down and being effectively told "you're a child, you have nothing
useful to contribute here".

We can all agree I think that we'd all like the standardisation process to
move more quickly. In particular, I and some of the experts would agree
that more testing of proposed features "out in the wild" is especially
desirable. The most recent friction has arisen due to strongly differing
opinions on how to add manpower to the process.

From my understanding of what the experts think: Volunteers should
subscribe to industry magazines and should attend where possible the six
monthly conferences. They should participate in creating amendments to
the standard, either by writing one themselves (alone or in
conjunction with others) or by reviewing, pondering and discussing
amongst themselves any pitfalls or contraindications which may arise
both within each proposal and between proposals.

What I've been saying: The previously outlined (and traditional) process
is inefficient because it makes no use of the large body of programmers
out there ready to volunteer their time to "patching" the standard just
like any piece of free software. Such people have little wish to involve
themselves in the intricacies of the standard, the standardisation process
nor even this newsgroup and most especially physically attending
conferences. They want short duration quick in & out self-contained
mini-projects. While they aren't averse to writing prose, they'd far
prefer to write code.

We know from complexity theory & software management theory that debugging
software, unlike any other part of generating software, is very
parallelisable. If you view "C++" as a two dimensional graph where the
left is the ISO C++ standard text and the right is the implementation of
the same in code, one quickly realises that small obvious features are
nearly pure software.

Therefore the debugging of same can be parallelised.

Therefore if more "out in the wild" experience is needed, this is an area
ripe for improvement by applying the same methodologies free software
production methodologies have shown so successful. Obviously how well such
methodologies work is directly proportional to how far "right" the subject
matter is on the left-right graph - if it gets too far left, it'll break
down and become useless.

Hence me suggesting it's only suitable for small features which everyone
agrees on the general form thereof ie; very right side of the graph. And
small, so they have the least interaction with other features and thus
unwanted knock-on effects.

> You
> know, what you're describing is very much like the open-source
> development process, where the mantra from maintainers to those who
> want major changes that the maintainer has no time to implement is,
> "patches welcomed". It's hard to understand why anyone would find the
> same response insulting in the context of C++ standardization.

Thing is, the ISO C++ standard is not a piece of software. Applying what
you yourself and others suggested of "go do it yourself" can be shown
historically to be most ineffective - for one thing, I have no ability to
fork because forking is totally anti the whole point of having a standard
(ie; most of the "value" of C++ is that it's *standard*). If one reads
economic theory on information (I suggest "Information Rules" by Shapiro
and Varian), it's clear that ad-hoc dispersed development of standards is
a real no-no as indeed ISO's own guidelines suggest - there MUST be a
centralised authority on all matters.

That's why I see no point in working alone on this, or indeed anything
less than a substantial number of people being involved. Standards are
valuable because they increase interoperability & predictability in an
uncertain world which lowers costs & everyone benefits. Anything which
works against that will ossify as it's against the natural order of things
- think of it like an economic environment where making a profit kills
your company (eg; like a department in a university - if it doesn't go
over budget, it will have its budget cut).

>> If this is because committee doesn't have time and volunteers,
>> please see my suggestion above.
>
> That's a very simplistic view of what I think the reasons are, but
> it's basically on target. We also don't have much time or many volunteers
> to go out and seek volunteers, set up new bureaucratic processes,
> implement new web applications, maintain servers, etc, etc.

The whole point of what I am saying is that by investing work now you can
save yourselves oodles of work in the future ie; you must spend money to
save money. Now I can't make you all do this - so far it would seem you
all are totally opposed to any possibility of change whatsoever and won't
even tolerate discussion about it - but then change is hard.

Also, you guys must yourselves invest this time - if you try and push it
off onto other people then it loses legitimacy. Your names are half of
what makes the C++ standard what it is - I can count off the top of my
head at least ten experts by name simply because they have written
articles in CUJ or MSDN - and when they themselves do something it is like
a nexus and with them everyone else turns. The name "Niall Douglas" thus
far means nothing to anyone and what I do has no effect on others
whatsoever - hence me doing something as you all have proposed is
pointless.

> As I mentioned in other responses to Niall, we are actually working on
> ways to make it more efficient. In fact *I* in particular have
> expended a great deal of time and energy in that quest. If you had
> any idea how difficult it can be to get a committee to even agree on
> informal processes for making progress between meetings, you'd be a
> lot more sympathetic to our efforts.

You won't know this, but I know precisely what it's like. I participated
extensively within my university's bureaucracy during my degree as part of
the student union, within national civil rights meetings and my father is
extensively involved within one of Ireland's largest universities and thus
he is well embedded within the Irish government. In fact, I've just spent
these last three days discussing how to get some legislation through the
Irish Parliament.

I'm not saying that a magic wand can be waved and it'll all become better.
I am saying that by some investment of time now many of the small obvious
features can be outsourced both soon and for the foreseeable future - you
could get maybe a 10% efficiency improvement directly but you'd also get a
lot more happy programmers out there as they'd finally have a mechanism by
which they could straightforwardly contribute to fixing their own pet
hates. And salving that feeling of helplessness would be very good for
karma.

Cheers,
Niall

llewelly

Jun 3, 2004, 1:27:04 PM
Loïc Joly <loic.act...@wanadoo.fr> writes:

> Niall Douglas wrote:
>
> [snip]
>> If one were to patch in support to a major compiler - GCC is handy
>> -
> [snip]
>
> It seems from various discussion that gcc is often considered as the
> platform of choice to experiment with the C++ core language.

By whom? Sure, there are many posters who say 'GCC is handy', but how
many people actually use it to implement C++ extensions?
Few. Also, note that the current maintainers of g++ are more opposed
to extensions than not.

> I do not
> think so.
>
> Even if its GNU status make it really available, it is a production
> compiler, and as such, it has to tackle many elements unnecessary for
> experimentation. Several programming languages, a very wide range of
> supported OS, a source code in C (I don't know why),

ISTR the decision to use C was made in 1987. At that time, the C++
community was growing exponentially, but it was much smaller than
the C community. C implementations were available for more
platforms, and were more consistent. I don't know if C++ was
seriously considered, but if GCC had been written in C++
originally, it would have been more difficult to port, and
had a smaller audience of potential contributors.

Even now, C89 implementations are much more consistent than C++
implementations.

Until very recently, (and perhaps even now) the majority of potential
developers knew C and not C++. Even now, most actual GCC
developers, from what I can see, know C substantially better than
C++.

> a CVS process
> designed for stability, an optimized compilation process...

[snip]

What do you mean by 'optimized compilation process'?

Matt Austern

Jun 3, 2004, 1:27:25 PM
davide...@yahoo.com (David Eng) writes:

> C++ standard is a public domain and the users of C++ want C++ to be
> more useful and powerful. If you know the C++ standard committee
> cannot keep up with the demands from the users, why don't you and
> committee make a change instead of doing this (the current standard
> process you stated above) for ever? There is plenty of manpower if
> you consider the Universities and research institutions. C++ standard
> committee can set up some full time staff. I don't think there are
> any problems to raise several millions dollar per year (consider
> having a good C++ standard, how much money government can save).

One thing you might not realize is that the C++ standardization
committee's budget is less than a few million dollars a year. A whole
lot less, in fact. The budget is zero. Committee members pay their
own expenses to attend meetings, and the costs of the meeting itself
(renting the meeting space, setting up the network, etc.) are paid for
by companies that volunteer to host the meeting. The committee has no
paid employees.

If the standards committee had a seven-figure budget and a full-time
staff, then a lot of things might be different. Not necessarily
better, mind you, but certainly different.

Joerg Richter

Jun 3, 2004, 1:28:51 PM
> Does this compiler exist ?

Perhaps LLVM has something in common with what you are looking for.
http://llvm.cs.uiuc.edu/

Joerg

Terje Slettebø

Jun 3, 2004, 3:28:10 PM
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr8vnkj...@news.iol.ie>...
> On Mon, 31 May 2004 01:52:53 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
>
> > You don't need a "fast track mechanism" in order to pursue that
> > course. Demonstrating implementability and usefulness, and producing
> > an implementation for reference, would in itself have a huge impact on
> > getting any feature approved.
>
> Of course I could just go do this - however would it be worthwhile in me
> doing so? If I were going it alone then there's zero reason to expect
> anyone else would participate and every reason to expect it would be
> wasted work, especially given the GCC maintainers aren't keen on it
> either.

You don't need any official mandate for this. If I were you, I'd first
try to start a discussion about the proposed feature (or simply start
from an existing proposal), and that way see if there is sufficient
interest and whether there are serious objections to it, as well as
perhaps arrive at a sufficiently detailed description.

Then, having done that, you could evaluate whether you felt there was
enough interest in it, to, if possible, do a proof-of-concept
implementation in GCC.

Take the Boost libraries as an example: They have no official mandate
with regards to the C++ committee, and yet, many of the libraries have
been accepted in the C++ Library TR.

> The higher level stuff should remain a committee led process. I'm talking
> purely small "obvious to everyone" features - hence my use of
> "fast-trackable" to denote them.

The problem with this is that there's hardly any "obvious to everyone"
features, at least when you get down to the detailed specification of
it. Take something like "typeof", which appears obvious, and yet
there's been much discussion about it: Should it return a reference if
the value is a reference, etc.

If you're interested, have a look in "The Design and Evolution of C++"
to see what costs even a "perfect" proposal may have (even if it's not
breaking any code, etc.). One important thing is that it should fit
well with the rest of the language.

> > I don't mean to sound callous, but the last person to make similar
> > chronic complaints -- that he couldn't make a positive contribution
> > because he couldn't afford to attend meetings because he lived in
> > Australia and he couldn't afford to leave the country -- didn't even
> > show up when we finally had a meeting in Sydney.
>
> Actually, after your last post before this one I was going to revanish
> again having concluded further posts would be pointless. You all may not
> realise this, but I really respect all you guys and I'm sorry I seemed to
> have annoyed you all (but then that's precisely why I avoid groups like
> this as I stated at the outset). OTOH, I have tried to suggest ways you
> all could improve things from my point of view and have been singularly
> told "either work through 'the system' or shut up".

I can't believe this is what you've been told. I see from Dave's
posting numerous suggestions for how you might contribute, and it's
sincerely meant. I mean, there's a lot of stuff the committee has on
its agenda, so people willing to help is understandably very welcome.

The response in the thread might instead have been more one of "put up
or shut up." In other words, don't complain that you can't contribute,
and make a difference: You can!

I haven't - at least from this thread - seen a single shred of
evidence that roadblocks have been put in your way of contribution. On
the contrary; it's been welcomed.

Therefore, I can understand that it feels frustrating and perhaps
insulting for hard-working people in the committee and the community
in general when they get accused of not doing enough to facilitate
people helping, something that would be most welcome.

Frankly, it sounds a little like an excuse for not doing anything
about what you criticise or want to improve.

> Well I'm sorry for saying this, but that's not very open-minded

I'd agree if that had been expressed, but as I said, I can't believe
that was the intention. I get the opposite impression.

> - and while I may be wrong,
> or be the 200th person to suggest the same, this angry "to question my
> actions is to insult me"

I haven't seen this, either.

> attitude reminds me of the unproductive academic
> clique typical in most universities (please note Dave I am not saying this
> of you specifically - I have the very highest of regard for you - but
> generally of the majority of responses I have received).

I suggest you may have misunderstood, and taken honest, constructive
criticism for personal criticism. I can assure you, that for any
serious, relevant proposal, there will be a _lot_ of discussion and
criticism, basically shaking it to its roots, to see if it's solid.
The same thing happens at Boost: the participants tear the proposed
libraries to shreds, and the library authors love them for it. :) It
makes the libraries so much better, whether or not they get accepted.

In short, it's important, as you suggest, to keep an open mind oneself,
too, and to look for the best in people, taking the criticism as
honest feedback and contribution.

> Anyway, enough with the criticisms - I'm going to stop now. I wish you all
> the best of luck and if you ever do enact a mechanism by which I could
> usefully contribute, you'll see me again.

I haven't seen what you feel may be preventing you from contributing
(you don't need GCC approval to implement a proof-of-concept), so the
way I see it, it's possible to start here and now.


Regards,

Terje

Loïc Joly

Jun 3, 2004, 3:35:18 PM
llewelly wrote:

> Until very recently, (and perhaps even now) the majority of potential
> developers knew C and not C++. Even now, most acutal GCC
> developers, from what I can see, know C substantially better than
> C++.

I do not know if the expression translates well into English, but in
French I would say this is a chicken-and-egg situation.

Since GCC is written in C, people who know C++ and not C (I'm one of
them) do not participate in GCC; thus GCC is only designed by C people,
so it should be developed in C.

;)

>>a CVS process
>>designed for stability, an optimized compilation process...
>
> [snip]
>
> What do you mean by 'optimized compilation process'?

Optimising, at least a little, for compilation time.


--
Loïc

David Eng

Jun 3, 2004, 4:09:53 PM
da...@boost-consulting.com (David Abrahams) wrote in message news:<ufz9da...@boost-consulting.com>...

> >>There are no powers that be; just a bunch of people who get together
> >>every 6 months to try to make progress, and spend a lot of time
> >>having email discussions and writing proposals in between meetings.
> >>You can either participate and make a difference or you can expect us
> >>to handle the things we have the time, energy, and expertise to work
> >>on.
> >
> > I feel insulted with your response like "either work through the
> > system or just shut up" when people have some suggestions or
> > questions about C++.
>
> Excuse me, but I think you owe me an apology for implying that I would
> say that, or anything like it. I never said _anything_ even a bit
> like that.

I am very sorry for misunderstanding your posts. Believe me, I don't
have any disrespect toward you and the committee. On the contrary, I
hold your work and contribution to C++ in high admiration. I am
frustrated to see the popularity of C++ decreasing. I just offered my
opinion on making the C++ standard process more efficient in order to
compete with Java and C#. Based on your posts (excuse me if I mistake
them due to my poor English), the problems are that the C++ standards
committee lacks manpower and funding. But I think there are some ways
to overcome them.

> > I don't think there are any problems to raise several millions
> > dollar per year (consider having a good C++ standard, how much money
> > government can save).
>
> You must be joking. Do you have the time and expertise to raise
> several million dollars per year? If not, why do you assume that I
> do? Am I being trolled here?

I am not joking. You have underestimated the C++ standard's name.
Professor Doug Schmidt can raise millions of dollars for his CORBA C++
implementation; I cannot imagine why the C++ standards committee
cannot raise that amount of money. There are plenty of resources here,
considering the National Science Foundation, government agencies, and
large corporations. If you haven't tried, how can you be so sure it is
a joke? If the C++ standards committee allowed me to represent it to
raise money, I would be more than glad to do so.

> > This process is cheap yet you get the best results. This is what
> > NASA does. NASA has very limited research capability.
>
> And comparatively enormous funding.
>
> > Most projects were done through Universities and research
> > institutions. NASA pays tuitions for PhD students so that
> > Professors have funding to do research and write papers and students
> > are able to get degree. I believe that C++ standard committee is
> > able to do the same thing, instead of only having a bunch of people
> > who get together every 6 months.
>
> We already have a significant representation in research institutions
> and universities. Among them, Bjarne Stroustrup and Gabriel Dos Reis
> (Texas A&M), Jaakko Jarvi, Jeremy Siek, and Andrew Lumsdaine (Indiana
> University), and Douglas Gregor (RPI). There is no major barrier to
> entry for University researchers; on the contrary, it is easier in
> many ways for them to participate than people outside academia.
>
> That said, you might be surprised at how little interest there is in
> C++ among computer science researchers in general, mostly because of
> "aesthetic concerns".

You misunderstood my point (or I didn't present it clearly). You have
to fund the project. Professors get promoted based on two factors:
funding and research papers. When a professor gets funding, he/she can
hire PhD students to do the research and write papers based on the
results of the research. The C++ standards committee can select 3-5
projects every year, then let universities, research institutions, or
individuals bid on the projects. This process is cheap because
professors are paid by the university; the funding is to pay for PhD
students' tuition. I bet there are a lot of professors who would bid
for the projects. If a professor cannot get funding, he/she will never
get promoted.

> > I agree that it shall be conservative to change the language, but it
> > shall be more aggressive to add or standardize libraries.
>
> Interesting choice of language. I don't know if you're a native
> English speaker or not, but you should know that it comes off
> sounding like a "royal decree" when you use "shall" that way. Maybe
> you mean "should"?

You are right. I come from a royal family. I am joking. Thanks for
teaching me English. I am not a native English speaker.

> > If this is because committee doesn't have time and volunteers,
> > please see my suggestion above.
>
> That's a very simplistic view of what I think the reasons are, but
> it's basically on target. We also don't have much time or many volunteers
> to go out and seek volunteers, set up new bureaucratic processes,
> implement new web applications, maintain servers, etc, etc.

It sounds like you know there are some problems, but you don't want to
make a change to correct them. I think the current C++ standard
process will not work. In terms of supporting distributed computing,
C++ is far behind Java and C#. If the C++ standards committee keeps
doing things the way it does now, I can only think the decline of C++
will be inevitable.

> > I also cannot believe the standard committee doesn't have a plan to
> > standardize a thread library in next C++ extension.
>
> Believe it.

Yes, I believe it now!

Eric Backus

Jun 3, 2004, 4:17:21 PM

===================================== MODERATOR'S COMMENT:

This thread began as a discussion of how to develop proof-of-concept
implementations for proposed features. That's certainly relevant for
C++ standardization, and it's certainly relevant to talk about which
compilers might make suitable testbeds. However, it's also easy for
this thread to spin off into directions that aren't so directly
relevant to standardization. Let's try to do the former and not the
latter.


===================================== END OF MODERATOR'S COMMENT

"Loïc Joly" <loic.act...@wanadoo.fr> wrote in message
news:c9l73m$1dv$1...@news-reader5.wanadoo.fr...


> Even if its GNU status make it really available, it is a production
> compiler, and as such, it has to tackle many elements unnecessary for
> experimentation. Several programming languages, a very wide range of
> supported OS, a source code in C (I don't know why),

Perhaps having the source code in C makes it easier to bootstrap on unusual
platforms? In any case, I'd guess that C++ wasn't very stable when GCC was
first developed.

--
Eric

James Kuyper

Jun 3, 2004, 5:49:45 PM
Loïc Joly <loic.act...@wanadoo.fr> wrote in message news:<c9l73m$1dv$1...@news-reader5.wanadoo.fr>...

..
> So, here come my question : Does anyone have another contender ? What I
> look for is :
..

> - A compiler written in C++, and in "good" C++
..

> - A compiler that does not care too much about exotic platforms (even if
> at least two supported platform seems a minimum)

Those two requirements are in conflict, as far as I'm concerned.
"good" C++ code produces correct results even on exotic platforms.

Gabriel Dos Reis

Jun 4, 2004, 12:05:36 PM
On Thu, 3 Jun 2004, David Eng wrote:

| I am very sorry for misunderstanding your posts. Believe me, I don't
| have any disrespect toward you and the committee. On the contrary, I
| hold your work and contribution to C++ in high admiration. I am
| frustrated to see the popularity of C++ decreasing.

I believe the perceived decrease in popularity of C++ is best attacked
by teaching people how they can write more robust, better code in
current standard C++. Turning the language into a much faster-moving
target would be a real disservice. I don't think a programming language
meant to be taken seriously should be released every three months.
(For research languages, it is different -- you want to publish as
frequently as possible ;-))

We need more education, more exposure. We need better tools to support
development in C++.

--
Gabriel Dos Reis
g...@cs.tamu.edu
Texas A&M University -- Computer Science Department
301, Bright Building -- College Station, TX 77843-3112

---

llewelly

Jun 4, 2004, 12:06:13 PM
s_googl...@nedprod.com (Niall Douglas) writes:
[snip]

> What I've been saying: The previously outlined (and traditional)
> process is inefficient because it makes no use of the large body of
> programmers out there ready to volunteer their time to "patching" the
> standard just like any piece of free software. Such people have little
> wish to involve themselves in the intricacies of the standard, the
> standardisation process nor even this newsgroup and most especially
> physically attending conferences. They want short duration quick in &
> out self-contained mini-projects.

However there are many interdependencies within the standard. E.g.,
the presence of overloading necessitates ADL. As far as the core
language is concerned, it is hard to see how there can be 'short
duration quick in & out self-contained mini-projects'.

> While they aren't averse to writing
> prose, they'd far prefer to write code.

Maybe there should be an organization encouraging the implementation
of proposed features whose specification is sufficiently clear?


>
> We know from complexity theory & software management theory that
> debugging software, unlike any other part of generating software, is
> very parallelisable.

The 'debugging' part of standardization is submission of issues with
the standard as described by
http://www.jamesd.demon.co.uk/csc/faq.html#B15 (by the way, I
could have sworn there was a description of how to open an issue
with the standard on the wg21 web site, but I cannot find it!).

The process for submitting a DR is about as open as it can
get. (However, like nearly all aspects of standardization, it is
sadly not getting communicated to the majority of the C++
community.) The only non-open part I can see is that the standard
itself has unfortunate copying restrictions on it.

> If you view "C++" as a two dimensional graph
> where the left is the ISO C++ standard text and the right is the
> implementation of the same in code, one quickly realises that small
> obvious features are nearly pure software.

IMO, at least as far as the core language is concerned, there are no
'small obvious features'. Whatever is added must have some interaction
with the numerous complex features already in place, and some of
those interactions will be subtle (two examples: (a) constructor
syntax and the 'most vexing parse', and (b) namespaces and ADL),
and will require understanding the intricacies of the C++
standard to design and to implement.

> Therefore the debugging of same can be parallelised.

To a certain extent, the debugging of the C++ standard has already
been parallelised. It could be better parallelised if more of the
C++ community was aware of the standardization process.

> Therefore if more "out in the wild" experience is needed, this is an
> area ripe for improvement by applying the same methodologies free
> software production methodologies have shown so successful. Obviously
> how well such methodologies work is directly proportional to how far
> "right" the subject matter is on the left-right graph - if it gets too
> far left, it'll break down and become useless.
>
> Hence me suggesting it's only suitable for small features which
> everyone agrees on the general form thereof ie; very right side of the
> graph. And small, so they have the least interaction with other
> features and thus unwanted knock-on effects.

[snip]

I think these kinds of features, where they can be found, are
confined to potential standard library features, and exclude
potential core language features. Note that Boost focuses on
library features.

For potential standard library features, Boost serves as a de facto
instance of the very open source methodologies you seem to wish
for.

IOWs, much of what you ask for *already exists*, but it isn't well
advertised.

llewelly

Jun 4, 2004, 12:06:39 PM
loic.act...@wanadoo.fr (Loïc Joly) writes:

> llewelly wrote:
>
>> Until very recently, (and perhaps even now) the majority of potential
>> developers knew C and not C++. Even now, most actual GCC
>> developers, from what I can see, know C substantially better than
>> C++.
>
> I do not know if the expression translates well in English, but I
> would say in French that this is the story of the hen and the egg.

In English it is known as a 'chicken-and-egg problem'.

> Since GCC is written in C, people who know C++ and not C (I'm part of
> them) do not participate in GCC, thus GCC is only designed by C
> people, so it should be developed in C.
>
> ;)

Exactly. :) .

IIRC, EDG is also implemented in C. There are a good many sound
reasons for choosing C as the implementation language for a C++
compiler. But those reasons grow weaker as the years go by.

However - for an open source C++ implementation, it must be
recognized that most of those truly interested in contributing
will be both more willing and more effective in C++ than in C.

llewelly

Jun 4, 2004, 12:06:47 PM
kuy...@wizard.net (James Kuyper) writes:

> Loïc Joly <loic.act...@wanadoo.fr> wrote in message news:<c9l73m$1dv$1...@news-reader5.wanadoo.fr>...


> ..
>> So, here comes my question: Does anyone have another contender? What I
>> look for is:
> ..
>> - A compiler written in C++, and in "good" C++
> ..
>> - A compiler that does not care too much about exotic platforms (even if
>> at least two supported platform seems a minimum)
>
> Those two requirements are in conflict, as far as I'm concerned.
> "good" C++ code produces correct results even on exotic platforms.

The trouble is that exotic platforms may only have partial C++
support (for example, ARM C++, or C++ without exceptions). At
some point you must compromise between portability and
expressiveness.

Alan Griffiths

Jun 4, 2004, 12:07:14 PM
"Sergiy Kanilo" <ska...@artannlabs.com> wrote in message news:<2hv3hsF...@uni-berlin.de>...

> "Francis Glassborow" <fra...@robinton.demon.co.uk> wrote in message
> news:31WCzWJA...@robinton.demon.co.uk...
> >
> > Then join ACCU where we provide a bi-monthly report on what is happening
> > at WG21 and WG14.
>
> Could you give more details about that? The ACCU site states that
>
> "Those interested in the C and C++ standardisation processes may join the
> ISO Standards Development Forum. The ACCU provides these members with
> access to current standardisation material and a bi-yearly standards
> briefing letter ..."
>
> Unfortunately, I cannot find how to join the ISDF, and the only other
> information about the ISDF is http://www.accu.org/isdf/public/, dated
> January 98

Unfortunately the ACCU website is long overdue for updating - it is
maintained by a small group of volunteers who have other pressures on
their time.

The ISDF was once a "special interest group" within the ACCU and was
an optional extra on membership. Up to 1996 (or thereabouts) it
published its own bi-annual newsletter. Since then, reporting on
standards activities has moved to the main ACCU publications (C Vu and
Overload). Hence there is no longer a separate SIG to join - but the
option to donate to a fund that supports standards activities is still
available to members.

As Francis indicates, there is a bi-monthly report on standards
activities - and on a less frequent basis feature articles on specific
issues.
--
Alan Griffiths (former editor of the ACCU's ISDF Newsletter)
http://www.octopull.demon.co.uk/

Michiel Salters

Jun 4, 2004, 12:07:25 PM
kuy...@wizard.net (James Kuyper) wrote in message news:<8b42afac.04060...@posting.google.com>...

> Loïc Joly <loic.act...@wanadoo.fr> wrote in message news:<c9l73m$1dv$1...@news-reader5.wanadoo.fr>...
> ..
> > So, here comes my question: Does anyone have another contender? What I
> > look for is:
> ..
> > - A compiler written in C++, and in "good" C++
> ..
> > - A compiler that does not care too much about exotic platforms (even if
> > at least two supported platform seems a minimum)
>
> Those two requirements are in conflict, as far as I'm concerned.
> "good" C++ code produces correct results even on exotic platforms.

No, they're absolutely not. The restriction on not supporting exotic
platforms refers to the compiler's output. A compiler basically produces
machine code. If a C++ compiler for x86 is written in perfectly portable
C++, you could trivially turn it into a cross-compiler, but targeting
the PowerPC is still non-trivial.

This is probably most relevant when it comes to exception handling.
Quite a few important topics, like register handling, are only relevant
for production performance.

Regards,
Michiel Salters

Terje Slettebø

Jun 4, 2004, 12:08:05 PM
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr8zw0y...@news.iol.ie>...

> On Wed, 2 Jun 2004 19:12:07 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
>
> > As I mentioned in other responses to Niall, we are actually working on
> > ways to make it more efficient. In fact *I* in particular have
> > expended a great deal of time and energy in that quest. If you had
> > any idea how difficult it can be to get a committee to even agree on
> > informal processes for making progress between meetings, you'd be a
> > lot more sympathetic to our efforts.
>
> ...

>
> Now I can't make you all do this - so far it would seem you
> all are totally opposed to any possibility of change whatsoever and won't
> even tolerate discussion about it - but then change is hard.

Uhm, I think you're making a lot of accusations about the people of
this group that simply aren't true. The first quote above from Dave is
from the _same_ posting, and yet you claim "it would seem you all are
totally opposed to any possibility of change whatsoever and won't even
tolerate discussion about it".

The way I read Dave's posting is the complete opposite: a desire to
improve things, and I haven't seen anything about not tolerating a
discussion about it.

Where do you get this impression from? I haven't seen any sign of it
at all. Could you be reading something into the postings that isn't
actually there? You mentioned there might be a culture clash, and that
there could be talking at cross-purposes. Because of this, it's extra
important not to assume something that might not be the case, and
thereby risk insulting a lot of people.

I think you should either point to actual quotes backing up your above
claim, or apologise for it, because with this way of discussing - what
I consider personal attacks - you risk alienating people, and making
them stop listening to you.

This is well meant, to try to help you avoid pissing people off.
That is, if you do care about that. It certainly won't help your
cause.

Regards,

Terje

David Abrahams

Jun 4, 2004, 12:09:58 PM
s_googl...@nedprod.com (Niall Douglas) writes:

> On Wed, 2 Jun 2004 19:12:07 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
>
> We can all agree I think that we'd all like the standardisation
> process to move more quickly.

Yes, but I'm not sure we all agree on what we mean by that. For
example, I don't think we should be publishing new official C++
standards every year or two.

> In particular, I and some of the experts would agree that more
> testing of proposed features "out in the wild" is especially
> desireable. The most recent friction has arisen due to strongly
> differing opinions on how to add manpower to the process.

I don't want to add manpower. In some cases the problem is too much
manpower ;-)

We actually have a problem with maximizing the effectiveness of those
who already have the time and energy to participate.

> From my understanding of what the experts think: Volunteers should
> subscribe to industry magazines and should attend where possible the
> six monthly conferences. They should participate in creating of
> amendments to the standard either through writing one themselves
> either alone or in conjunction with others and through reviewing,
> pondering and discussing amongst themselves any pitfalls or
> contraindications which may arise both within each proposal and
> between proposals.

Or through generating proof-of-concept implementations.

> What I've been saying: The previously outlined (and traditional)
> process is inefficient because it makes no use of the large body of
> programmers out there ready to volunteer their time to "patching" the
> standard just like any piece of free software.

As you rightly point out below, the standard can't be patched "just
like any piece of free software."

> Such people have little wish to involve themselves in the intricacies
> of the standard

If you're not willing to involve yourself in the intricacies, you're
in no position to write new standard text.

> the standardisation process nor even this newsgroup and most
> especially physically attending conferences. They want short
> duration quick in & out self-contained mini-projects. While they
> aren't averse to writing prose, they'd far prefer to write code.

But the standard isn't code. Now it sounds like you're talking about
patching some code... but we don't have a standard C++ reference
implementation.

> We know from complexity theory & software management theory that
> debugging software, unlike any other part of generating software, is
> very parallelisable. If you view "C++" as a two dimensional graph
> where the left is the ISO C++ standard text and the right is the
> implementation of the same in code, one quickly realises that small
> obvious features are nearly pure software.

I certainly don't realize that.

First of all, please name *one* "small obvious feature" so that we can
get this discussion off the level of your assumptions, which are
probably false. I've never seen one. Then name two more of the same so
we can get the idea that there are enough to warrant new bureaucratic
structures and whatever else you're asking for.

Second, the smallest and most-obvious features seem to me to be about
80-90% proposal and deliberation.

> Therefore the debugging of same can be parallelised.
>
> Therefore if more "out in the wild" experience is needed, this is an
> area ripe for improvement by applying the same methodologies free
> software production methodologies have shown so
> successful. Obviously how well such methodologies work is directly
> proportional to how far "right" the subject matter is on the
> left-right graph - if it gets too far left, it'll break down and
> become useless.

?? I thought you said the graph was 2-D?

>> You know, what you're describing is very much like the open-source
>> development process, where the mantra from maintainers to those who
>> want major changes that the maintainer has no time to implement is,
>> "patches welcomed". It's hard to understand why anyone would find
>> the same response insulting in the context of C++ standardization.
>
> Thing is, the ISO C++ standard is not a piece of software. Applying
> what you yourself and others suggested of "go do it yourself" can be
> shown historically to be most ineffectual - for one thing, I have no
> ability to fork because forking is totally anti the whole point of
> having a standard (ie; most of the "value" of C++ is that it's
> *standard*). If one reads economic theory on information (I suggest
> "Information Rules" by Shapiro and Varian), it's clear that ad-hoc
> dispersed development of standards is a real no-no as indeed ISO's
> own guidelines suggest - there MUST be a centralised authority on
> all matters.

That's just wrong. There is not today a "centralised authority on all
matters". There is only a body voting on matters of which text ends
up in the standard.

As I said in an earlier post (which doesn't seem to be showing up),
when I made the first proof-of-concept for exception safety in the
standard library, I didn't ask for anyone's permission or blessing
first, nor did I ask the committee to set up less "primitive use of
the internet", or any of the other things you've been complaining
about. I just did it. It was taken seriously to the extent that it
was good, solid work.

You don't need a "centralised authority" to coordinate your efforts
either. You've been told repeatedly that making proofs-of-concept
for proposed features (like move/forwarding --- please do it!) would
be very helpful. Your response was basically, "why should I
believe you?" Well, I give up. As Terje implied, you should either
do something or stop complaining.

> That's why I see no point in working alone on this, or indeed
> anything less than a substantial number of people being
> involved.

Then it's up to you to recruit those other people. They're not going
to volunteer to work with you just because some committee "decrees"
it -- even if we *were* to make such decrees.

> Standards are valuable because they increase interoperability &
> predictability in an uncertain world which lowers costs & everyone
> benefits. Anything which works against that will ossify as it's
> against the natural order of things - think of it like an economic
> environment where making a profit kills your company (eg; like a
> department in a university - if it doesn't go over budget, it will
> have its budget cut).

Your analogy makes no sense to me.

> The whole point of what I am saying is that by investing work now
> you can save yourselves oodles of work in the future ie; you must
> spend money to save money. Now I can't make you all do this - so far
> it would seem you all are totally opposed to any possibility of
> change whatsoever

You really haven't been reading the responses to your posts, have you?
Didn't you see the part where I wrote

"we are actually working on ways to make it more efficient. In
fact *I* in particular have expended a great deal of time and
energy in that quest"

??

Is that not utterly inconsistent with the idea that we "all are
totally opposed to change"?

Oh, you even quoted it in your own posting. You should really try
harder to integrate the information that's staring you in the face
before you post.

> and won't even tolerate discussion about it

Just stop it now. The only discussion that's not well-tolerated by me
is baseless accusation that "discussion isn't tolerated".

> Also, you guys must yourselves invest this time - if you try and
> push it off onto other people

Nobody's pushing it off. If you want it, make it happen.

> then it loses legitimacy. Your names are half of what makes the C++
> standard what it is - I can count off the top of my head at least
> ten experts by name simply because they have written articles in CUJ
> or MSDN - and when they themselves do something it is like a nexus
> and with them everyone else turns. The name "Niall Douglas" thus
> far means nothing to anyone

It's beginning to mean something.

> and what I do has no effect on others whatsoever - hence me doing
> something as you all have proposed is pointless.

Then become "an expert." You're just making lame excuses for an
unwillingness to put your own butt on the line. My name meant
"nothing to anyone" when I decided to try to have an impact on the C++
committee. Nothing other than your own insistence makes you less
qualified to make a difference than anyone else.

>> As I mentioned in other responses to Niall, we are actually working on
>> ways to make it more efficient. In fact *I* in particular have
>> expended a great deal of time and energy in that quest. If you had
>> any idea how difficult it can be to get a committee to even agree on
>> informal processes for making progress between meetings, you'd be a
>> lot more sympathetic to our efforts.
>

> You won't know this, but I know precisely what it's like...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

James Kuyper

Jun 4, 2004, 12:10:45 PM
llewe...@xmission.dot.com (llewelly) wrote in message news:<86ise9k...@Zorthluthik.local.bar>...
..

> Few. Also, note the current maintainers of g++ are more opposed
> to extensions than not.

So, who says you need their permission? It's freely available source
code that you're allowed to modify after copying.

James Kuyper

Jun 4, 2004, 12:11:12 PM
davide...@yahoo.com (David Eng) wrote in message news:<6b74193f.0406...@posting.google.com>...
..

> > You must be joking. Do you have the time and expertise to raise
> > several million dollars per year? If not, why do you assume that I
> > do? Am I being trolled here?
>
> I am not joking. You have well underestimated the C++ standard's
> name. Professor Doug Schmidt can raise millions of dollars for his
> CORBA C++ implementation; I cannot imagine why the C++ standard
> committee cannot raise that amount of money. There are plenty of
> resources here, considering the National Science Foundation,
> government agencies, and large corporations.

If you believe that this can be done, the C++ standard committee would
probably welcome you with open arms, if you were to volunteer to carry
out the fund-raising activity. I think you'll quickly find out why
they haven't done this.

Also, keep in mind the golden rule: "he who has the gold, makes the
rules". As it currently stands, the C++ standard is produced under a
system that is funded mainly by sales of copies of the standard. That
means that the main financial incentive the standard organizations
have is to produce a standard that maximizes the number of people
willing to pay for a copy of it. That strikes me as a good incentive -
the best way to satisfy it is to produce a standard that is so
well-designed that it becomes widely used. Of course, it's up to those
organizations themselves to decide whether to pay attention to that
incentive, but at least it's an incentive in the right direction.

With government or corporate financing, the main financial incentive
would be to satisfy the needs of the governments or companies providing
the financing. The interests of other users would have little or no
financial significance.

Francis Glassborow

Jun 4, 2004, 1:42:54 PM
In article <Pine.GSO.4.58.04...@unix.cs.tamu.edu>, Gabriel
Dos Reis <g...@cs.tamu.edu> writes

>We need more education, more exposure. We need better tools to support
>development in C++.

Yes, and we also need better education. That is why I am canvassing for
a technical report on teaching C++. Nothing large, but as well as advice
on how C++ should be taught, it could include several small APIs
specifically to allow teaching to include some graphics, keyboard, mouse
and sound. None of these are necessary for professional programmers but
each significantly adds to the potential for more positive teaching.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

Niall Douglas

Jun 4, 2004, 8:45:16 PM
On Fri, 4 Jun 2004 16:06:13 +0000 (UTC), llewelly
<llewe...@xmission.dot.com> wrote:

>> What I've been saying: The previously outlined (and traditional)
>> process is inefficient because it makes no use of the large body of
>> programmers out there ready to volunteer their time to "patching" the
>> standard just like any piece of free software. Such people have little
>> wish to involve themselves in the intricacies of the standard, the
>> standardisation process nor even this newsgroup and most especially
>> physically attending conferences. They want short duration quick in &
>> out self-contained mini-projects.
>
> However there are many interdependencies within the standard. E.g.,
> the presence of overloading necessitates ADL. As far as the core
> language is concerned, it is hard to see how there can be 'short
> duration quick in & out self-contained mini-projects'

I disagree. Computer programming is basically an exercise in
manipulating complexity. Therefore the requisite skill of holding in
your mind all parts of a program, and the likely "ripple" effect of a
change, is by necessity well developed in programmers.

Remember, I am not saying that core language changes should be directly
entered by this process I outline. I am saying that the implementation
experience core language changes require could be outsourced to
volunteers, who would then return their findings from that experience
to the ISO C++ committee. This would allow "fast tracking" of such
features into the standard, but with adequate vetting by the existing
ISO processes beforehand.

>> While they aren't averse to writing
>> prose, they'd far prefer to write code.
>
> Maybe there should be an organization encouraging the implementation
> of proposed features whose specification is sufficiently clear?

There appears to be a huge gulf between what the experts in this
newsgroup view as a small obvious feature and what I do. While the
likelihood is that I am wrong, as I am the one with the least
experience, I am also probably quite different from you guys in viewing
software generation as a mostly organic process ie; there is very
little initial planning and the design "grows" during the
implementation of the project. By implementing in a highly modular
fashion, one can cut out sections and replace them if hindsight shows
you were wrong. Or indeed, occasionally, one trashes the software and
begins again afresh.

You can really break your balls trying to design something of which you
have no implementation experience. You can labour, and labour, and labour
some more and you'll still cock it up. The reality is that core language
design is particularly prone to this problem - it's not like a library
where you can try out different solutions in your own time to decide on
the best one before changing the standard. I'm saying though that this
ability could be integrated into core language updates ie; introduce an
"evolutionary" element where feature specifics are designed organically
through experience.

To exhibit the potential of this way of doing things, compare typical
C++ libraries of ten or even five years ago against modern ones.
Previous C++ libraries were not exception safe and had numerous bad
practices, and their poor quality contributed greatly to the poor name
C++ holds in academia. However, through experience, modern C++
libraries have evolved into high quality solutions - as robust, secure
and efficient as those in any other language.

> I think these kinds of features, where they can be found, are
> confined to potential standard library features, and exclude
> potential core language features. Note that Boost focuses on
> library features.

You are one of many who have said this. Yet apart from some initial
comments about how I feel the STL could be improved, I have since
focused entirely on language features. It is interesting how people
understand something different from what I write, but then most have
dismissed all my arguments out of hand as the blatherings of a
nincompoop.

> For potential standard library features, Boost serves as a de facto
> instance of the very open source methodologies you seem to wish
> for.
>
> IOWs, much of what you ask for *already exists*, but it isn't well
> advertised.

A library can only do what the core language permits. As experiments
have shown, implementing move semantics using existing language
features is unwieldy at best and dangerous at worst. Implementing
dynamic library support without new language features produces highly
inefficient code, hence some major compilers adding new keywords.
Metaprogramming is probably the best example of both how powerful C++
templates are and how limited the language's direct support for them
is.

There are core language updates obvious to anyone with any experience of
C++ which could be added within no more than a year. Their details need
not be designed as they can be evolved. I am certain of it.

I am not here to talk about libraries - if there's something I don't like
I go write it on my own and don't waste time posting in here. I am talking
exclusively about core language features whose lack thereof is limiting my
ability to get on with things - hence my efforts within here. Even if I am
booed off the stage as a malcontent, maybe I at least will sow
subconscious seeds which will sprout in the future.

Cheers,
Niall

David Abrahams

Jun 5, 2004, 12:02:30 AM
s_googl...@nedprod.com (Niall Douglas) writes:

> On Mon, 31 May 2004 01:52:53 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
>
>>> I could however go patch GCC to add move constructors though
>>> unfortunately the GCC maintainers would not accept such a patch into
>>> the mainline as the standard hasn't ruled on it yet.
>>
>> I'm not sure that's a foregone conclusion. GCC already contains
>> numerous extensions beyond the official C/C++ language standards. I
>> think the libstdc++-v3 developers might welcome the opportunity to use
>> move/forwarding, too.
>
> I base that on what they said to me in response to suggestions I
> made. I understand it - I'd do the same if I were them.

Given the way you've misread people's responses to you in this thread,
I don't know whether they actually said that, or whether it's just your
misinterpretation of some entirely different message.

>> You don't need a "fast track mechanism" in order to pursue that
>> course. Demonstrating implementability and usefulness, and producing
>> an implementation for reference, would in itself have a huge impact on
>> getting any feature approved.
>
> Of course I could just go do this - however would it be worthwhile in
> me doing so?

Didn't I just get finished telling you so?

> If I were going it alone then there's zero reason to
> expect anyone else would participate and every reason to expect it
> would be wasted work

OK, I give up. I tried to tell you that it wouldn't go to waste, that
it would make a huge difference. If you won't take my word for it, I
don't know what more I can do.

> especially given the GCC maintainers aren't keen on it
> either. That's why such actions need official mandate

What sort of mandate would you like?

> eg; if a certain country had sought UN approval before pressing
> ahead regardless they'd have removed many of the extra hurdles they
> now face.


>
> And besides, I was suggesting that the standardisation process overall
> would be made more efficient by setting up a generic scheme like I
> outlined

Like I said before, be my guest, but...

> to fast track the uncontentious additions.

..as I and others have also said, there are no "uncontentious
additions" that it would be appropriate to approve on the time scale
you're talking about.

> It needs to be approved by the powers-that-be

Who's that, again?

> to work properly because that brings with it instant legitimacy as
> an official arm of the standardisation process. And legitimacy is
> everything here.

Legitimacy in the standardisation process is conferred by relevance
and solid work. I didn't ask anyone to set up an administrative
structure and pass an official edict before I wrote the first
exception-safe standard library components, to prove that adding
exception safety to the standard was possible and without negative
performance impacts. You don't need any of these things you're asking
for in order to get started on core feature implementation, create
momentum, get others to participate, and be taken seriously. Results
talk, and you-know-what walks.
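(An aside for readers unfamiliar with the work being referred to: the
strong exception-safety guarantee rests on idioms like copy-and-swap,
which can be shown in a few lines. The class below is my own toy
illustration, not Abrahams' actual library code.)

```cpp
#include <algorithm>
#include <cstddef>

// A toy buffer whose assignment gives the strong guarantee:
// if the copy throws, *this is left completely untouched.
class Buffer {
    std::size_t size_;
    int* data_;
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}
    Buffer(const Buffer& other)
        : size_(other.size_), data_(new int[other.size_]) {
        std::copy(other.data_, other.data_ + size_, data_);
    }
    ~Buffer() { delete[] data_; }

    void swap(Buffer& other) {               // never throws
        std::swap(size_, other.size_);
        std::swap(data_, other.data_);
    }

    // Copy-and-swap: everything that can throw happens while
    // constructing tmp, before *this is modified at all.
    Buffer& operator=(const Buffer& other) {
        Buffer tmp(other);   // may throw; *this still intact
        swap(tmp);           // nothrow commit
        return *this;
    }

    std::size_t size() const { return size_; }
};
```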

>>> And not just me, I can see a lot of people becoming interested
>>> because now they actually can DO something - not by talking across a
>>> period of years and spending thousands of euros visiting expensive
>>> international conferences but actually sitting down and coding.
>>
>> You make it sound like coding is the "real work". Well, it's
>> important, but someone has to design the features (a job you've
>> clearly been underestimating) and generate some agreement for them.
>
> Ach, you totally misunderstand (but you're hardly alone). Go back and
> look at my previous posts - I never once said that this process should
> be for major features.

There are no minor features for which specification is unimportant.

> It could only ever work for uncontentious
> features everyone agrees are needed

But we've already told you that those basically don't exist.

> and even then, those whose approximate form is already known ie;
> those features which need "implementational experience".

*All* features can benefit hugely from implementation experience.

> The higher level stuff should remain a committee led process. I'm
> talking purely small "obvious to everyone" features - hence my use of
> "fast-trackable" to denote them. I qualified what I meant by "fast
> track" a number of times but various posters repeatedly either twisted
> my words or ignored my explanations.

Oh, please stop. That hasn't happened. You've apparently ignored
those of us who've told you that nothing's small enough to be
fast-tracked in the way you've described.

>>> We'd also need a FREE copy of the full standard to generate diffs
>>> against
>>
>> I thought you wanted to work on code, not specifications (?)
>
> The people who do the code are surely the best placed to write the
> first draft of changes given that they would know what's required the
> most?

Not necessarily; drafting specifications is a very different skill
from coding.

>> What kind of assurance do you _normally_ get that people will take
>> your results seriously before you start a project? You didn't ask
>> for assurance before posting here...
>
> There is a huge difference between starting/forking a project and
> contributing to an existing one.

If you say so.

> Actually, after your last post before this one I was going to revanish
> again having concluded further posts would be pointless. You all may
> not realise this, but I really respect all you guys and I'm sorry I
> seemed to have annoyed you all (but then that's precisely why I avoid
> groups like this as I stated at the outset). OTOH, I have tried to
> suggest ways you all could improve things from my point of view and
> have been singularly told "either work through 'the system' or shut
> up".

Talk about twisting words. Nobody said that to you, and you should
probably apologize to the group for claiming that they did. I've been
trying to tell you that if you want to change the world you have to
put your own butt on the line. Making suggestions about what other
(volunteer) people should do is not going to make a difference,
especially when those people are part of a group with an existing way
of working. If you want to make a difference, stop telling other
people what they ought to do, and take some initiative yourself.

> Well I'm sorry for saying this, but that's not very open-minded -
> and while I may be wrong, or be the 200th person to suggest the same,
> this angry "to question my actions is to insult me" attitude reminds
> me of the unproductive academic clique typical in most universities
> (please note Dave I am not saying this of you specifically - I have
> the very highest of regard for you - but generally of the majority of
> responses I have received).

Well, I've been as hard on you as anyone here, so it's hard to
imagine what you're referring to.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

Thorsten Ottosen

unread,
Jun 5, 2004, 12:02:35 AM
to
| We need more education, more exposure. We need better tools to support
| development in C++.

which tools are you talking about?

br

Thorsten

Francis Glassborow

unread,
Jun 5, 2004, 12:54:21 PM
to
In article <opr83h12...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>There are core language updates obvious to anyone with any experience
>of C++ which could be added within no more than a year. Their details
>need not be designed as they can be evolved. I am certain of it.

Name one. All the experience I have says that they do not exist. Please
note that I specialise in trying to identify small changes, additions
etc. that would make C++ more usable yet I have never managed to come up
with a single change that has no ramifications elsewhere.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

Terje Slettebø

unread,
Jun 5, 2004, 1:11:39 PM
to
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr83h12...@news.iol.ie>...

> On Fri, 4 Jun 2004 16:06:13 +0000 (UTC), llewelly
> <llewe...@xmission.dot.com> wrote:
>
> Remember, I am not saying that core language changes should be directly
> entered by this process I outline. I am saying that the implementation
> experience core language changes require could be outsourced who then from
> that experience return their findings to the ISO C++ committee stage. This
> would allow "fast tracking" of such features into the standard but with
> adequate vetting by the existing ISO processes beforehand.

This already exists! The STL is one example. By producing a working
proof of concept, it was accepted into the standard at a point where,
as I understand it, the prevailing attitude was to stop adding things
and finish it.

That was a library change, and you've said you're talking about core
language changes, but I would think the situation isn't much different
there. By creating a proof of concept, a proposal stands a much better
chance of being taken seriously, and might be processed faster (as the
"implementability" part has been demonstrated).

> There appears to be a huge gulf between what the experts in this newsgroup
> view as a small obvious feature and what I do. While the likelihood is
> that I am wrong as I am the one with the least experience, I also am
> probably quite different from you guys in viewing software generation as a
> mostly organic process ie; there is very little initial planning and the
> design "grows" during the implementation of the project. By implementing
> in a highly modular fashion, one can cut out sections and replace them if
> hindsight shows you were wrong. Or indeed, occasionally, one trashes the
> software and begins again afresh.
>
> You can really break your balls trying to design something of which you
> have no implementation experience. You can labour, and labour, and labour
> some more and you'll still cock it up. The reality is that core language
> design is particularly prone to this problem - it's not like a library
> where you can try out different solutions in your own time to decide on
> the best one before changing the standard. I'm saying though that this
> ability could be integrated into core language updates ie; introduce an
> "evolutionary" element where feature specifics are designed organically
> through experience.

I think you'll find that many in this group (myself included), are
passionate about agile/evolutionary development. However, a standard
needs stability to be taken seriously. As you've probably heard,
there's millions of lines of code out there, and if you constantly
change the standard, possibly breaking a lot of it, then you get a
huge problem.

In short, agile/evolutionary/organic development is a great idea, but
you have to consider the context.

This doesn't mean that this approach can't be used to develop proof of
concepts of proposals, or, in the case of libraries, high-quality
libraries. You say this last point yourself, later in the posting.

> There are core language updates obvious to anyone with any experience of
> C++ which could be added within no more than a year. Their details need
> not be designed as they can be evolved. I am certain of it.

As Dave said in his posting: Name one.

There's been a lot of abstract discussion, here, and hardly any
concrete examples. Let's do it in the spirit of agile development, and
get down to the concrete stuff. :)

That'll get us something real to consider.

> Even if I am
> booed off the stage as a malcontent, maybe I at least will sow
> subconscious seeds which will sprout in the future.

So far, I haven't seen anything that could have that effect.

Regards,

Terje

David Abrahams

unread,
Jun 5, 2004, 1:11:47 PM
to
Loïc Joly <loic.act...@wanadoo.fr> wrote in message news:<c9l73m$1dv$1...@news-reader5.wanadoo.fr>...

> Even if its GNU status makes it really available, it is a production
> compiler, and as such, it has to tackle many elements unnecessary for
> experimentation: several programming languages, a very wide range of
> supported OSs, source code in C (I don't know why), a CVS process
> designed for stability, an optimised compilation process...
>
> So, here comes my question: does anyone have another contender? What I
> look for is:
>
> - A compiler that is not micro-optimised
> - A compiler written in C++, and in "good" C++
> - A compiler with source code available
> - A compiler that does not care too much about exotic platforms (even
>   if at least two supported platforms seems a minimum)
>
> Does this compiler exist?

I don't think so. There are projects like OpenC++ that are in the
same ballpark. I think for most purposes a compiler that produced C
(or even C++) code would make a fine platform for such
experimentation.

Niall Douglas

unread,
Jun 6, 2004, 7:52:47 PM
to
On Fri, 4 Jun 2004 16:09:58 +0000 (UTC), David Abrahams
<da...@boost-consulting.com> wrote:

>> We can all agree I think that we'd all like the standardisation
>> process to move more quickly.
>
> Yes, but I'm not sure we all agree on what we mean by that. For
> example, I don't think we should be publishing new official C++
> standards every year or two.

I see no problem with one major release every three years, with two
point releases in between on a yearly basis. I base this on computing's
rough 18-month iteration cycle (Moore's law); interestingly, software
in general seems to follow a similar pattern when in a competitive
environment.

I've seen a general view that C++ must not change too quickly. I
completely agree. However I would view major releases as the ONLY ones
which can break existing code - the point releases can not under any
circumstance change anything, they may only augment.

Anyone familiar with maintaining a library's ABI will know exactly what I
mean here.
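(For readers unfamiliar with the constraint alluded to here, a toy
illustration of why a binary-compatible release may only augment; the
struct names are invented for this sketch:)

```cpp
#include <cstddef>

// v1 of a hypothetical library ships this type; client code compiles
// against it and bakes sizeof(Widget) and member offsets into its
// binaries.
struct WidgetV1 {
    int id;
};

// If a later release adds a data member, the size and layout change.
// Old client binaries handing the struct to the new library (or vice
// versa) now disagree about the object's extent - an ABI break, even
// though nothing was removed at the source level.
struct WidgetV2 {
    int id;
    int flags;   // the "augmentation" that silently breaks old binaries
};
```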

>> In particular, I and some of the experts would agree that more
>> testing of proposed features "out in the wild" is especially
>> desireable. The most recent friction has arisen due to strongly
>> differing opinions on how to add manpower to the process.
>
> I don't want to add manpower. In some cases the problem is too much
> manpower ;-)
>
> We actually have a problem with maximizing the effectiveness of those
> who already have the time and energy to participate.

Ah, precisely the core of the problem I am suggesting solutions for! All
organic life in this universe organises itself - you merely need to create
an environment which fosters that self-organisation and the rest flows
from within under its own power. I'm sure Dave you have read books by Fred
P. Brookes and witnessed the remarkable self-organising enabler of the
internet?

>> the standardisation process nor even this newsgroup and most
>> especially physically attending conferences. They want short
>> duration quick in & out self-contained mini-projects. While they
>> aren't adverse to writing prose, they'd far prefer to write code.
>
> But the standard isn't code. Now it sounds like you're talking about
> patching some code... but we don't have a standard C++ reference
> implementation.

But then I'm not proposing the patching of the standard directly. The
committee is the only one able to do this and indeed should do this.

This isn't to say it can't be helped along. For example, if government
wants a new law they can either draft one themselves, form a committee to
do so or ask someone external to provide one eg; a trade body or even the
public. It's hardly unknown for a vested interest to write the bill in its
entirety and pass it to a government minister who then presents it as
their own.

You guys propose an amendment as you already do. If it's reasonably small
and could be patched into existing compilers relatively easily, you submit it
to the Bugzilla database and on to some front web page. Volunteers create
a patch implementing it for (say) GCC. Other volunteers apply this patch,
add support to their code and test it, refining the patch through comments
and further patches ie; evolutionary.

One month before the biannual meeting of the ISO C++ committee reports on
the progress of the real-world testing of the amendment are generated. You
guys at the committee meeting review them and after you feel you have a
handle on it, you generate a proposed amendment to the standard which is
posted again on the Bugzilla database for review. Or perhaps a volunteer
writes a proposed amendment which you guys review which saves you a step.
Either way, all roads still eventually lead through the traditional ISO
C++ committee.

What you gain and you are currently lacking is an army of volunteers who
put your ideas into practice so you can see the end results and thus can
both prevent mistakes and considerably shorten the time required for
finalising the form of new features.

I've spelt this out in bullet points in my post to Francis
Glassborow.

>> If you view "C++" as a two dimensional graph
>> where the left is the ISO C++ standard text and the right is the
>> implementation of the same in code, one quickly realises that small
>> obvious features are nearly pure software.
>
> I certainly don't realize that.
>
> First of all, please name *one* "small obvious feature" so that we can
> get this discussion off the level of your assumptions that probably
> are false. I've never seen one. Then name two more of same so we
> can get the idea that there are enough to warrant new bureaucratic
> structures and whatever else you're asking for.

As we've discussed them already, how about two: (i) move constructors and
(ii) dynamic library support. How both should work any C++ programmer of
any experience will have an automatic instinct for.
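(For reference, the move-constructor half of that claim looks like this
in the rvalue-reference syntax N1377 proposes, which C++11 eventually
adopted. The class below is my own toy, not code from the proposal:)

```cpp
#include <utility>

// A toy string whose move constructor steals the source's buffer
// instead of copying it - the feature N1377 proposes.
class MiniString {
    char* data_;
public:
    MiniString() : data_(new char[1]()) {}
    MiniString(const MiniString&) : data_(new char[1]()) {}  // deep copy elided
    MiniString(MiniString&& s) : data_(s.data_) {  // move constructor
        s.data_ = nullptr;  // source relinquishes ownership
    }
    ~MiniString() { delete[] data_; }
    bool owns_buffer() const { return data_ != nullptr; }
};
```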

I also am not asking for bureaucratic structures! After an initial
expenditure of effort, the system I am proposing should be nearly
self-operating and should consume virtually none of any committee member's
time. In fact, it should save substantial amounts of committee time! Why
do software projects run Bugzilla or something like it if this weren't the
case?

> Second, the smallest and most-obvious features seem to me to be about
> 80-90% proposal and deliberation.

I said this in the post I made to llewelly just before this one, but
manipulating large complex abstract sets of intertwined relationships is
precisely what computer programming is. From this viewpoint, the standard
is no different - changes will have a ripple effect which will require
mitigatory measures. Designing with little or no implementational
experience is extremely tough with a high rate of failure - therefore take
a superior approach and get yourself implementational experience - evolve
your design through practice.

Of course you yourself Dave will strongly disagree with this sentiment -
just as we disagreed on c++-sig all those months ago (it's the same core
argument). However ponder this - could you have possibly written
Boost.Python v2 without having gone through writing Boost.Python v1 first?

>> there MUST be a centralised authority on
>> all matters.
>
> That's just wrong. There is not today a "centralised authority on all
> matters". There is only a body voting on matters of which text ends
> up in the standard.

A centralised authority doesn't need a name nor mandate. It simply needs
to be observed by enough people and it becomes so.

No matter how chaotic and decentralised you might think the
C++ standardisation process is, it is still highly hierarchical. I know of
no compiler vendor who is busy augmenting C++ with their own features as
vendors have become burned by that in the past - the D language is the
closest thing I know to it but it's distinctly non-commercial. Like it or
not, every compiler vendor has their eyes turned towards the standard and
thus every C++ programmer is similarly though indirectly the same.

You are a centralised authority.

> As I said in an earlier post (which doesn't seem to be showing up),
> when I made the first proof-of-concept for exception safety in the
> standard library, I didn't ask for anyone's permission or blessing
> first, nor did I ask the committee to set up less "primitive use of
> the internet", or any of the other things you've been complaining
> about. I just did it. It was taken seriously to the extent that it
> was good, solid work.

As I've explained in other posts, third party libraries are quite
something different to core language features. And I'll be getting back to
you on good solid work done without anyone's permission or blessing first
in a few months time though it'll be in a field quite unrelated to C++.

> "we are actually working on ways to make it more efficient. In
> fact *I* in particular have expended a great deal of time and
> energy in that quest"
>

> Is that not utterly inconsistent with the idea that we "all are
> totally opposed to change"?

I used the phrase "you all" as using "ye" isn't Queen's English anymore. I
meant the plural form of "you" which by definition doesn't mean everyone.

In fact Dave, the way you've phrased your replies has been the most
productive of all which is why I keep replying to your posts in
particular. We may or may not be making progress - I don't know, we'll
know next few posts.

>> and what I do has no effect on others whatsoever - hence me doing
>> something as you all have proposed is pointless.
>
> Then become "an expert." You're just making lame excuses for an
unwillingness to put your own butt on the line. My name meant
> "nothing to anyone" when I decided to try to have an impact on the C++
> committee. Nothing other than your own insistence makes you less
> qualified to make a difference than anyone else.

There is so much wrong with this statement I don't know where to begin. I
could indeed write this length of post just about why your statements here
are totally incorrect and worse, are dangerous forms of thinking (see
books by Alain de Botton). However, it would be distinctly off-topic
despite being relevant, so I will condense it heavily:-

Experts don't become, they are made - just as no president of the United
States came from anything but a privileged background. Experts become recognised
as such because they came up with the goods at the right place & time on
one or more occasions - in your case, an exception safe STL just when
people were wondering if it were possible within a timeframe when doing so
got widespread notice. If any of these factors had been out you would be
as anonymous as I am, just as Einstein would be unknown if he had deduced
relativity either before or after when he did. All actions are more a
product of the culture & society in which the person is born than any
characteristic or achievement of that individual themselves. I know this
reality will be alarming to anyone raised in a capitalist system where the
false motivator of individual potential reigns supreme, but it is so
nevertheless.

If you create an environment which *enables* something you desire then it
will come about with no micro-managing required. *I* cannot create such an
environment because I do not have the authority to do so - therefore I
would have to both create myself as an authority and create the
environment at the same time which is very much harder. If I were to press
ahead alone, how could I issue press statements to media outlets in the
name of the ISO C++ committee calling for volunteers to implement an
amendment? How could I even run a website officially endorsed by the ISO
C++ committee? *I* can't do any of this, but a person entitled by mandate
of the central authority CAN and that's why it's pointless for me to do
this alone.

I have "placed my butt on the line" on numerous occasions but only when I
felt it was worth doing so in the long run. Feel free to search the
internet and trawl up the good and bad of my history much of which is in
the public domain. I know I cannot ever contribute anything useful to
C++ as the standardisation process currently stands so I do not delude
myself into thinking otherwise - far better to concentrate on other areas
where I could make a difference. After all, the total capacity for human
output is quite finite!

Cheers,
Niall

Niall Douglas

unread,
Jun 6, 2004, 7:53:04 PM
to
On Sat, 5 Jun 2004 17:11:39 +0000 (UTC), Terje Slettebø
<tsle...@hotmail.com> wrote:

> I think you'll find that many in this group (myself included), are
> passionate about agile/evolutionary development. However, a standard
> needs stability to be taken seriously. As you've probably heard,
> there's millions of lines of code out there, and if you constantly
> change the standard, possibly breaking a lot of it, then you get a
> huge problem.

The handy thing about having a patched compiler is that people can try
compiling their software with it to see whether the change breaks their
code before it becomes part of the standard.

> In short, agile/evolutionary/organic development is a great idea, but
> you have to consider the context.

Seeing as you're agreeing with me that this is the best way to quickly
develop new core language features, why not go all the way and agree that
an easily accessible formalised mostly-automatic mechanism for processing
these would be of great benefit to all concerned?

Cheers,
Niall

Niall Douglas

unread,
Jun 6, 2004, 7:53:30 PM
to
>> There are core language updates obvious to anyone with any experience
>> of C++ which could be added within no more than a year. Their details
>> need not be designed as they can be evolved. I am certain of it.
>
> Name one. All the experience I have says that they do not exist. Please
> note that I specialise in trying to identify small changes, additions
> etc. that would make C++ more usable yet I have never managed to come up
> with a single change that has no ramifications elsewhere.

My whole point is that the ramifications you speak of can be identified
far more quickly with an improved process better utilising the resources
available. Ideally, a proposed revised standard could be tested in its
entirety by producing a test compiler implementing the new core language
before it was actually adopted as the standard.

While ideal, I can't see that happening without significant quantities of
additional funding and there's little commercial point in investing in
C++. However for smaller easier to implement features it should be
possible with volunteer time alone.

Here's an imaginary timeline:

1. Core language change proposed. Let's take N1377 as an example.
2. N1377 could be patched into GCC reasonably easily. Therefore it is
farmed out into a Bugzilla database. A call for volunteers is issued via a
mailing list and an announcement placed on Slashdot etc.
3. Volunteers generate patch for GCC implementing N1377.
4. Volunteers download patch for GCC implementing N1377, patch their GCC,
add in support for move constructors to their code and test. Report
problems back to the Bugzilla database.
5. From feedback/contributions, patch is refined.
6. Iterate steps 4 & 5 repeatedly.
7. When patch is stable and everyone is happy, volunteers generate a
proposed amendment to the standard using the experience gained of the
ramifications, pitfalls and other issues. Proposed amendment/report goes
to next six-monthly ISO C++ committee.
8. Committee attendees review amendment/report. If they see no
contraindications with other proposed features it gets added via fast
track process. If there may be issues, they can defer addition in lieu of
new information, amend it or do whatever.

N1377 would surely take less than six months from beginning to working
implementation in GCC. Even if contraindications were found and dealt
with, it couldn't take longer than one year. Hence my original statement
above.
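(Step 4's "add in support ... and test" would amount to regression tests
like the following - hypothetical volunteer code, written in the syntax
N1377 proposed and C++11 later adopted:)

```cpp
#include <utility>

// Counting copies vs. moves is the simplest way a volunteer could
// verify that the patched compiler really invokes the new move
// constructor where the proposal says it should.
struct Probe {
    static int copies;
    static int moves;
    Probe() {}
    Probe(const Probe&) { ++copies; }
    Probe(Probe&&) { ++moves; }
};
int Probe::copies = 0;
int Probe::moves = 0;
```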

Cheers,
Niall

David Abrahams

unread,
Jun 6, 2004, 7:53:51 PM6/6/04
to
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr83h12...@news.iol.ie>...

> There are core language updates obvious to anyone with any experience of
> C++ which could be added within no more than a year. Their details need
> not be designed as they can be evolved. I am certain of it.

So, let's get down to specifics, taking move semantics for example (or
pick another feature; I don't care). What are you proposing, that the
committee say "we're going to have move semantics", without nailing
down precisely what that means or how it works? And then what? We
put that phrase in the working paper, and as people find problems with
the fact that it has no meaning, we make evolutionary adjustments
until everyone can agree that it's well-specified and works with the
rest of the standard text?

--
Dave Abrahams
boost-consulting
http://www.boost-consulting.com

llewelly

unread,
Jun 6, 2004, 7:54:34 PM
to
kuy...@wizard.net (James Kuyper) writes:

> llewe...@xmission.dot.com (llewelly) wrote in message news:<86ise9k...@Zorthluthik.local.bar>...
> ..
>> Few. Also, note the current maintainers of g++ are more opposed
>> to extensions than not.
>
> So, who says you need their permission?

If you wish to run your own project, you don't. However, the most
widely used gcc variants are close to the FSF GCC project, and
co-operative with it. If an extension is accepted into the FSF
GCC project, it will receive much wider use.

Maybe with wider use of a proposed extension, better guesses about
its utility, etc, can be made.

Francis Glassborow

unread,
Jun 7, 2004, 12:11:41 AM
to
In article <opr85eqz...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes


That is it. I see absolutely no point in reading any more of your posts.
You have absolutely no idea about how complex a change such as
introducing move semantics is. Note that there have been some very
bright and exceptionally talented compiler implementors involved in
working on move semantics and yet you seem to think that they are just
dragging their feet when they tell you that it is just not that simple.

WG21 has many people serving on it who know a great deal about managing
change but you seem to be convinced that you know better than all of
them. Fine go away and do the work and prove us wrong but please do not
waste our time by telling us that we do not understand the process of
language development. Or do you think that you have unique insights that
the rest of us lack?


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

Francis Glassborow

unread,
Jun 7, 2004, 12:12:34 AM
to
In article <opr85eql...@news.iol.ie>, Niall Douglas
<s_googl...@nedprod.com> writes

>As we've discussed them already, how about two: (i) move constructors
>and (ii) dynamic library support. How both should work any C++
>programmer of any experience will have an automatic instinct for.

Yes and every single one will have a different view. Move semantics is
fairly tough, none of us have any idea about how to reconcile the
approaches of two major OSs as regards dynamic libraries. If you think
otherwise then please sit down and write a paper explaining how we
should do it because at the moment we are struggling.
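(The irreconcilable defaults referred to here are visible in the idiom
portable libraries already resort to: Windows DLLs export nothing
unless asked, while traditional Unix ELF objects export everything
unless told otherwise. A sketch; MYLIB_API, BUILDING_MYLIB and the
function are placeholder names of my own:)

```cpp
// Public headers bridge the two opposite defaults with a macro.
#if defined(_WIN32)
#  ifdef BUILDING_MYLIB
#    define MYLIB_API __declspec(dllexport)   // opt-in export on Windows
#  else
#    define MYLIB_API __declspec(dllimport)
#  endif
#elif defined(__GNUC__)
   // only meaningful once -fvisibility=hidden flips the ELF default
#  define MYLIB_API __attribute__((visibility("default")))
#else
#  define MYLIB_API
#endif

MYLIB_API int mylib_version() { return 1; }   // placeholder function
```

n1496's third model essentially standardises this status quo: which
default applies stays implementation-defined.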

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

---

David Abrahams

unread,
Jun 7, 2004, 12:13:26 AM
to
s_googl...@nedprod.com (Niall Douglas) writes:

Now remind us again please why you can't set up a bug database and
issue a call for volunteers here, thereby beginning this process? I
think you could get this whole thing started in a SourceForge project
in a few hours with absolutely no financial investment.

Boost was started without any official sanction from the committee or
any other investment from the committee per se. In the beginning it
was nothing more than a mailing list on egroups (now yahoogroups).
What you're describing would fill a very similar role to the one Boost
does in the library arena. I *know* that an effort like the one
you're describing would draw participation from interested members of
the committee, just like Boost has.

Once again:

a) It is up to those who have a clear and passionate vision of the
future to make it happen.

b) Absolutely *nothing* makes you a less-legitimate member of the C++
standardization community than anyone else, other than your saying so.

And now, I'm done with this conversation.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

David Abrahams

unread,
Jun 7, 2004, 12:14:56 AM
to
s_googl...@nedprod.com (Niall Douglas) writes:

> On Fri, 4 Jun 2004 16:09:58 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
>>> We can all agree I think that we'd all like the standardisation
>>> process to move more quickly.
>>
>> Yes, but I'm not sure we all agree on what we mean by that. For
>> example, I don't think we should be publishing new official C++
>> standards every year or two.
>
> I see no problem with one major release every three years with two
> point releases in between on a yearly basis.

You don't consider the fact that most of the implementors and quite a
large community of users would be outraged to be problematic?

> I base this on the cycle of computing roughly iterating along
> multiples of 18 months due to Moore's prediction. Interestingly,
> software in general seems to follow a similar pattern when in a
> competitive environment.
>
> I've seen a general view that C++ must not change too quickly. I
> completely agree. However I would view major releases as the ONLY
> ones which can break existing code

Breaking code every three years is unthinkable. The sentiment in the
committee at large hasn't even come around definitively to the idea of
breaking code for C++0x, a 10 year cycle.

> - the point releases can not under any
> circumstance change anything, they may only augment.

We're barely able to do "point releases" on a 5-year basis, as
revealed by our experience with TR1.

> Anyone familiar with maintaining a library's ABI will know exactly
> what I mean here.

Since I maintain several libraries, I know exactly what you mean.
Anyone familiar with language standardization will know that even
reasonably-solid point releases aren't possible on a 1-year cycle.

>> We actually have a problem with maximizing the effectiveness of those
>> who already have the time and energy to participate.
>
> Ah, precisely the core of the problem I am suggesting solutions for!
> All organic life in this universe organises itself - you merely need
> to create an environment which fosters that self-organisation and the
> rest flows from within under its own power. I'm sure Dave you have
> read books by Fred P. Brooks and witnessed the remarkable
> self-organising enabler of the internet?

Umm, as a founding member of Boost I don't think you need to lecture
me about that.

That process is entirely consistent with the structures that are
already in place. Anyone can submit a paper for committee review.
Generate your reports and if you think you're able to write proposed
standard language, please submit it; it's always better to have
something than nothing.

> What you gain and you are currently lacking is an army of volunteers
> who put your ideas into practice so you can see the end results and
> thus can both prevent mistakes and considerably shorten the time
> required for finalising the form of new features.
>
> I've spelt this out in index bulleted points in my post to Francis
> Glassborow.

Yes, they mostly look good; all that remains is for you to put them
into play.

>> First of all, please name *one* "small obvious feature" so that we can
>> get this discussion off the level of your assumptions that probably
>> are false. I've never seen one. Then name two more of same so we
>> can get the idea that there are enough to warrant new bureaucratic
>> structures and whatever else you're asking for.
>
> As we've discussed them already, how about two: (i) move constructors
> and (ii) dynamic library support. How both should work any C++
> programmer of any experience will have an automatic instinct for.

I can't believe you persist in making that assertion, despite having
been told that among experienced (even expert-level) programmers
there's considerable disagreement about how both of those features
should work.

> I also am not asking for bureaucratic structures! After an initial
> expenditure of effort, the system I am proposing should be nearly
> self-operating and should consume virtually none of any committee
> member's time. In fact, it should save substantial amounts of
> committee time! Why do software projects run Bugzilla or something
> like it if this weren't the case?

Because they need to manage their bugs. Bugzilla just saves time over
managing them on paper. Right now, we don't have any (code) bugs to
manage, so we don't need bugzilla. I don't see how adding code bugs
to our process is going to save any time.

>> Second, the smallest and most-obvious features seem to me to be about
>> 80-90% proposal and deliberation.
>
> I said this in the post I made to llewelly just before this one, but
> manipulating large complex abstract sets of intertwined relationships
> is precisely what computer programming is. From this viewpoint, the
> standard is no different - changes will have a ripple effect which
> will require mitigatory measures. Designing with little or no
> implementational experience is extremely tough with a high rate of
> failure - therefore take a superior approach and get yourself
> implementational experience - evolve your design through practice.
>
> Of course you yourself Dave will strongly disagree with this sentiment
> -

Of course you yourself Niall will continue to make incorrect and
marginally insulting assertions about my point-of-view.

I don't think you'll find anyone on the committee who disagrees with
the idea that standardizing things with implementations to back them
up is better. In fact it's a core principle of how we're supposed to
be operating.

>>> there MUST be a centralised authority on all matters.
>>
>> That's just wrong. There is not today a "centralised authority on all
>> matters". There is only a body voting on matters of which text ends
>> up in the standard.
>
> A centralised authority doesn't need a name nor mandate. It simply
> needs to be observed by enough people and it becomes so.
>
> No matter how chaotic and decentralised you might think the C++
> standardisation process is, it is still highly hierarchical. I know
> of no compiler vendor who is busy augmenting C++ with their own
> features, as vendors have been burned by that in the past

It's true that there isn't a great deal of language extension going on
but you have the reasons completely wrong: it's largely because
vendors are completely occupied just trying to catch up with the
current standard (yes, even 6 years after standardization) and/or
implementing support for new platforms, optimizations, and other
features that fit entirely within the scope of the current language
definition. That's one reason it wouldn't be too smart to start
changing the core language every 3 years, especially not until a
substantial number of vendors have caught up to the current target.

> - the D language is the closest thing I know to it but it's
> distinctly non-commercial. Like it or not, every compiler vendor has
> their eyes turned towards the standard and thus every C++ programmer
> is similarly though indirectly the same.
>
> You are a centralised authority.

Me, personally??

>> As I said in an earlier post (which doesn't seem to be showing up),
>> when I made the first proof-of-concept for exception safety in the
>> standard library, I didn't ask for anyone's permission or blessing
>> first, nor did I ask the committee to set up less "primitive use of
>> the internet", or any of the other things you've been complaining
>> about. I just did it. It was taken seriously to the extent that it
>> was good, solid work.
>
> As I've explained in other posts, third party libraries are quite
> something different to core language features.

They're different but core feature proofs-of-concept require no more
blessing from a "centralised authority" than library proofs.

> And I'll be getting back to you on good solid work done without
> anyone's permission or blessing first in a few months time though
> it'll be in a field quite unrelated to C++.

And so quite irrelevant to this discussion.

>> "we are actually working on ways to make it more efficient. In
>> fact *I* in particular have expended a great deal of time and
>> energy in that quest"
>>
>> Is that not utterly inconsistent with the idea that we "all are
>> totally opposed to change"?
>
> I used the phrase "you all" as using "ye" isn't Queen's English
> anymore. I meant the plural form of "you" which by definition doesn't
> mean everyone.

Well then who *are* you accusing of being totally opposed to change?
Be specific. Vaguely applied labels just amount to demagoguery.

> In fact Dave, the way you've phrased your replies has been the most
> productive of all which is why I keep replying to your posts in
> particular. We may or may not be making progress - I don't know, we'll
> know next few posts.

We're not. You are all fired up about an idea which might even be a
good one but you still expect someone else to make it happen.

I want you to know that the only reason I keep replying is to help
avert a feeding frenzy of people jumping to the wrong conclusions
about the way the committee works. I want to make it crystal clear to
everyone else reading these posts that you can make whatever
contribution you're willing to invest in, and that *nothing* is
stopping you other than the fact you insist that you are powerless.

>>> and what I do has no effect on others whatsoever - hence me doing
>>> something as you all have proposed is pointless.
>>
>> Then become "an expert." You're just making lame excuses for an
>> unwillingness to put your own butt on the line. My name meant
>> "nothing to anyone" when I decided to try to have an impact on the C++
>> committee. Nothing other than your own insistence makes you less
>> qualified to make a difference than anyone else.
>
> There is so much wrong with this statement I don't know where to
> begin. I could indeed write this length of post just about why your
> statements here are totally incorrect and worse, are dangerous forms
> of thinking (see books by Alain de Botton). However, it would be
> distinctly off-topic despite being relevant so I will condense it
> heavily:-
>
> Experts don't become, they are made - just as every president of the
> United States came from a privileged background. Experts become
> recognised as such because they came up with the goods at the right
> place & time on one or more occasions - in your case, an exception
> safe STL just when people were wondering if it were possible within a
> timeframe when doing so got widespread notice.

Yeah, because I made it an issue. I made a big stink about it, then
argued my case, produced a reference implementation, and wrote papers
for the committee. That's why it got noticed. To a large extent I
created the context that made it matter.

> If any of these factors had been out you would be as anonymous as I
> am, just as Einstein would be unknown if he had deduced relativity
> either before or after when he did.

Enough philosophy. Are you implying that move semantics isn't a hot
enough topic right now for a proof-of-concept implementation to make a
difference?

> All actions are more a product of the culture & society in which the
> person is born than any characteristic or achievement of that
> individual themselves. I know this reality will be alarming to
> anyone raised in a capitalist system where the false motivator of
> individual potential reigns supreme, but it is so nevertheless.
>
> If you create an environment which *enables* something you desire
> then it will come about with no micro-managing required. *I* cannot
> create such an environment because I do not have the authority to do
> so -

That is utter hogwash. You have exactly as much authority to do it
for your favorite core language feature as I did for my pet library
feature.

> therefore I would have to both create myself as an authority and
> create the environment at the same time which is very much
> harder. If I were to press ahead alone, how could I issue press
> statements to media outlets in the name of the ISO C++ committee
> calling for volunteers to implement an amendment?

You don't need to. And given your desire to get things done quickly,
waiting for the committee as a whole to officially vote on and endorse
a resolution to call for volunteers seems highly counterproductive.

> How could I even run a website officially endorsed by the ISO C++
> committee?

You don't need to. Boost has no official endorsement, yet has a great
impact and lots of credibility. Who, _specifically_, do you expect to
run the website, anyway?

> *I* can't do any of this, but a person entitled by mandate of the
> central authority CAN and that's why it's pointless for me to do
> this alone.

So don't do it alone. Ask people to join you. Lots of people are
interested. That's what I did on the exception-safety issue, and it
worked for me. That's what we did with Boost.

> I have "placed my butt on the line" on numerous occasions but only
> when I felt it was worth doing so in the long run. Feel free to search
> the internet and trawl up the good and bad of my history much of which
> is in the public domain. I know I cannot ever contribute anything
> useful to C++ as the standardisation process currently stands so I do
> not delude myself into thinking otherwise - far better to concentrate
> on other areas where I could make a difference. After all, the total
> capacity for human output is quite finite!

You're certainly generating a great deal of output that is making no
difference whatsoever. It's amazing how you continue to argue for the
impossibility of accomplishing anything on your own initiative.
That's the surest route to a self-fulfilling prophecy.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

---

Terje Slettebø

Jun 7, 2004, 12:46:04 PM
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr85eql...@news.iol.ie>...

> On Fri, 4 Jun 2004 16:09:58 +0000 (UTC), David Abrahams
> <da...@boost-consulting.com> wrote:
> > We actually have a problem with maximizing the effectiveness of those
> > who already have the time and energy to participate.
>
> Ah, precisely the core of the problem I am suggesting solutions for! All
> organic life in this universe organises itself - you merely need to create
> an environment which fosters that self-organisation and the rest flows
> from within under its own power. I'm sure Dave you have read books by Fred
> P. Brookes and witnessed the remarkable self-organising enabler of the
> internet?

You may also be familiar with "The Timeless Way of Building", which I
recently read, and I which I found very illuminating.

It has the same ideas of spontaneous self-organisation, as in
organisms, and in communities.

> >> the standardisation process nor even this newsgroup and most
> >> especially physically attending conferences. They want short
> >> duration quick in & out self-contained mini-projects. While they
> > aren't averse to writing prose, they'd far prefer to write code.
> >
> > But the standard isn't code. Now it sounds like you're talking about
> > patching some code... but we don't have a standard C++ reference
> > implementation.
>
> But then I'm not proposing the patching of the standard directly. The
> committee is the only one able to do this and indeed should do this.
>
> This isn't to say it can't be helped along. For example, if government
> wants a new law they can either draft one themselves, form a committee to
> do so or ask someone external to provide one eg; a trade body or even the
> public. It's hardly unknown for a vested interest to write the bill in its
> entirety and pass it to a government minister who then presents it as
> their own.

You may be familiar with this in the C++ standard as well: it's
called writing a formal proposal. ;)

Anybody can do that, and submit it to the committee. Those can even be
as detailed as containing a proposed new wording in the standard for
the change/amendment.

> You guys propose an amendment as you already do. If it's reasonably small
> and could be patched into existing compilers relatively easy you submit it
> to the Bugzilla database and on to some front web page. Volunteers create
> a patch implementing it for (say) GCC. Other volunteers apply this patch,
> add support to their code and test it, refining the patch through comments
> and further patches ie; evolutionary.
>
> One month before the biannual meeting of the ISO C++ committee reports on
> the progress of the real-world testing of the amendment are generated. You
> guys at the committee meeting review them and after you feel you have a
> handle on it, you generate a proposed amendment to the standard which is
> posted again on the Bugzilla database for review. Or perhaps a volunteer
> writes a proposed amendment which you guys review which saves you a step.
> Either way, all roads still eventually lead through the traditional ISO
> C++ committee.
>
> What you gain and you are currently lacking is an army of volunteers who
> put your ideas into practice so you can see the end results and thus can
> both prevent mistakes and considerably shorten the time required for
> finalising the form of new features.

This might be an idea, but as has been mentioned in this thread,
someone has to arrange such a thing, set up and maintain webpages,
etc. Perhaps write a proposal for this to the committee, and see what
comes out of it? If you or someone else could help arranging things,
if there's interest for this, that would probably help, as well.

> No matter how chaotic and decentralised you might think the
> C++ standardisation process is, it is still highly hierarchical. I know of
> no compiler vendor who is busy augmenting C++ with their own features as
> vendors have become burned by that in the past

Daveed Vandevoorde of EDG is working on implementing a metaprogramming
extension to an experimental version of the EDG compiler...
(http://www.vandevoorde.com/Daveed/News/Archives/000014.html)

He's also on the standards committee, with the rest of EDG, as well.

> - the D language is the
> closest thing I know to it but it's distinctly non-commercial. Like it or
> not, every compiler vendor has their eyes turned towards the standard and
> thus every C++ programmer is similarly though indirectly the same.

"typeof" is implemented in EDG (provided as a GCC compatibility
extension), and EDG is certainly a commercial compiler.

> > Then become "an expert." You're just making lame excuses for an
> > unwillingness to put your own butt on the line. My name meant
> > "nothing to anyone" when I decided to try to have an impact on the C++
> > committee. Nothing other than your own insistence makes you less
> > qualified to make a difference than anyone else.
>
> There is so much wrong with this statement I don't know where to begin. I
> could indeed write this length of post just about why your statements here
> are totally incorrect and worse, are dangerous forms of thinking (see
> books by Alain de Botton). However, it would be distinctly off-topic
> despite being relevant so I will condense it heavily:-
>
> Experts don't become, they are made - just as every president of the United
> States came from a privileged background. Experts become recognised
> as such because they came up with the goods at the right place & time on
> one or more occasions - in your case, an exception safe STL just when
> people were wondering if it were possible within a timeframe when doing so
> got widespread notice. If any of these factors had been out you would be
> as anonymous as I am, just as Einstein would be unknown if he had deduced
> relativity either before or after when he did.

I don't think so. Time and time again, world-famous people have said
that they got to where they are because they had a passion for something,
believed in something, and worked to make their dream a reality.

I'd be inclined to agree with Disney: If you can dream it, you can do
it. :)

This has a risk of straying quite off-topic, but there's plenty of
examples of people who didn't come from a "privileged background", and
yet became great. If it wasn't like this, how could anyone ever
improve their situation?

Dave hasn't "only" come up with the exception safety specifications,
but also with other cool stuff. When you have talent, and use that
talent, these things can happen.

You said Dave's form of thinking is dangerous. I think it's the other
way around: If you think you can do something, you increase your chance
of actually succeeding. However, if you _don't_ think you can do
something, you'll give up before you start. Hence, it will become a
self-fulfilling prophecy. And _that's_ a dangerous way of thinking!
Someone having that attitude will think they'll never accomplish
anything, and, sadly, because of that belief, they'll likely be right.

> All actions are more a
> product of the culture & society in which the person is born than any
> characteristic or achievement of that individual themselves.

I don't believe that, and there's plenty of evidence to support my
claim. However, as I said, if people believe what you said, they may
also experience it, not because of any outside constraint, but because
they limit themselves.

Basically, you are what you think you are. ;)

> I know this
> reality will be alarming to anyone raised in a capitalist system where the
> false motivator of individual potential reigns supreme, but it is so
> nevertheless.

Do you have any support for this claim?

I know this kind of thinking is used in some societies/regimes to
artificially limit and oppress individuals - a kind of caste thinking. I
think the success of the capitalist system - and who it has worked
well for - show this belief not to be true.

> If you create an environment which *enables* something you desire then it
> will come about with no micro-managing required. *I* cannot create such an
> environment because I do not have the authority to do so - therefore I
> would have to both create myself as an authority and create the
> environment at the same time which is very much harder. If I were to press
> ahead alone, how could I issue press statements to media outlets in the
> name of the ISO C++ committee calling for volunteers to implement an
> amendment? How could I even run a website officially endorsed by the ISO
> C++ committee? *I* can't do any of this, but a person entitled by mandate
> of the central authority CAN and that's why it's pointless for me to do
> this alone.

_Cooperating_ with the standards committee, perhaps joining your
national body, may be a start. However, you said you don't want to
have anything to do with standards work. How can you expect the
standards committee to cooperate with you in setting this up, if you
don't want to have anything to do with them?

It sounds like you want to have your cake and eat it, too: Deciding
over the standards committee, without actually participating and
cooperating with it, yourself.

> I have "placed my butt on the line" on numerous occasions but only when I
> felt it was worth doing so in the long run. Feel free to search the
> internet and trawl up the good and bad of my history much of which is in
> the public domain. I know I cannot ever contribute anything useful to
> C++ as the standardisation process currently stands so I do not delude
> myself into thinking otherwise

If you feel this way, and don't want to join the standards process,
perhaps you could find someone already involved, and who might be
interested in what you suggest, to work for it.

Regards,

Terje

James Kuyper

Jun 7, 2004, 12:47:52 PM
llewe...@xmission.dot.com (llewelly) wrote in message news:<86fz98f...@Zorthluthik.local.bar>...

> kuy...@wizard.net (James Kuyper) writes:
>
> > llewe...@xmission.dot.com (llewelly) wrote in message news:<86ise9k...@Zorthluthik.local.bar>...
> > ..
> >> Few. Also, note the current maintainers of g++ are more opposed
> >> to extensions than not.
> >
> > So, who says you need their permission?
>
> If you wish to run your own project, you don't. However, the most
> widely used gcc variants are close to the FSF GCC project, and
> co-operative with it. If an extension is accepted into the FSF
GCC project, it will receive much wider use.

One good way to get it accepted into the FSF GCC project is to do the
work yourself on an unofficial copy, and develop some experience with
it to demonstrate to them that it's worth putting into the official
copy. That can lead to wider exposure, more experience, and if your
idea is good enough, it may become "existing practice" that has to be
included in the standard because everyone's already using it.

James Kuyper

Jun 7, 2004, 12:48:20 PM
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr83h12...@news.iol.ie>...

> On Fri, 4 Jun 2004 16:06:13 +0000 (UTC), llewelly
> <llewe...@xmission.dot.com> wrote:
>
> >> What I've been saying: The previously outlined (and traditional)
> >> process is inefficient because it makes no use of the large body of
> >> programmers out there ready to volunteer their time to "patching" the
> >> standard just like any piece of free software. Such people have little
> > wish to involve themselves in the intricacies of the standard, the
> >> standardisation process nor even this newsgroup and most especially
> >> physically attending conferences. They want short duration quick in &
> >> out self-contained mini-projects.
> >
> > However there are many interdependencies within the standard. E.g.,
> > the presence of overloading necessitates ADL. As far as the core
> > langauge is concerned, it is hard to see how there can be 'short
> > duration quick in & out self-contained mini-projects'
>
> I disagree. Computer software programming is basically an exercise in
> manipulating complexity. Therefore the requisite computer programming
> skills of being able to hold in your mind all parts of a computer program
> and the likely "ripple" effect of a change is well developed by necessity
> within computer programmers.

The ripple effects of standard changes are a problem of much greater
magnitude, and most programmers I've known are nowhere near having
developed the skills needed to keep them in mind. It helps if you've
used a few dozen different computer languages, and if you've used C++
on a few dozen widely different platforms, but programmers with that
kind of experience are quite rare. I'm not one of
those people, and I don't think most of the committee members are,
either. The committee achieves its breadth of vision by being a
committee, not by having supermen for members. Even if most committee
members are unfamiliar with the problems that a given proposal might
make for one platform, they can hope that there's at least one member
who will know, and can explain those problems to the other committee
members. Without the admittedly slow committee process, such problems
would leak out into the published standard far more often than they
currently do.

..


> There appears to be a huge gulf between what the experts in this newsgroup
> view as a small obvious feature and what I do. While the likelihood is
> that I am wrong as I am the one with the least experience, I also am
> probably quite different from you guys in viewing software generation as a
> mostly organic process ie; there is very little initial planning and the
> design "grows" during the implementation of the project. By implementing
> in a highly modular fashion, one can cut out sections and replace them if
> hindsight shows you were wrong. Or indeed, occasionally, one trashes the
> software and begins again afresh.

It's all a matter of what you mean by "very little" and "initial".
Design without some implementation experience behind it is nearly
useless. However, implementation without design always ends up in a
horrible mess, if you take it too far (and most people do). It's a
judgement call how much time to assign to each activity, and in what
order, and my impression from your words is that you're not in the
habit of assigning enough time to design, or that you are in the habit
of putting it off until later than it should be, but I have no way to
know whether or not that's true.

However, I think you're missing a key point. The C++ standard is all
about design. The key purpose of the standard is to be part of the
requirements specification for implementations of C++. Implementation
experience is supposed to be taken into consideration in putting
together that specification, but standardization is not
implementation, and the quick-fix approaches that you're talking about
don't apply here.

A standard is intended to be widely used, so programs can rely on it.
It can take 5-10 years for a new standard to be widely implemented. It
doesn't make sense to make changes to it more quickly than that; doing
so simply creates a moving target that no implementation will ever
fully conform with.

James Kuyper

Jun 7, 2004, 12:48:39 PM
s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr85eqz...@news.iol.ie>...

> >> There are core language updates obvious to anyone with any experience
> >> of C++ which could be added within no more than a year. Their details
> >> need not be designed as they can be evolved. I am certain of it.
> >
> > Name one. All the experience I have says that they do not exist. Please
> > note that I specialise in trying to identify small changes, additions
> > etc. that would make C++ more usable yet I have never managed to come up
> > with a single change that has no ramifications elsewhere.
..

> 1. Core language change proposed. Let's take N1377 as an example.

You're kidding, I hope? N1377 is your idea of a small,
non-controversial change with few ramifications, that therefore
doesn't require much consideration by the committee?

You have a very long way to go before you'll understand the true
complexity of computer language specification.

Terje Slettebø

Jun 7, 2004, 6:35:30 PM
kuy...@wizard.net (James Kuyper) wrote in message news:<8b42afac.04060...@posting.google.com>...
> s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr83h12...@news.iol.ie>...
> > On Fri, 4 Jun 2004 16:06:13 +0000 (UTC), llewelly
> > <llewe...@xmission.dot.com> wrote:
> >
> Design without some implementation experience behind it is nearly
> useless. However, implementation without design always ends up in a
> horrible mess, if you take it too far (and most people do). It's a
> judgement call how much time to assign to each activity, and in what
> order, and my impression from your words is that you're not in the
> habit of assigning enough time to design, or that you are in the habit
> of putting it off until later than it should be

When is it too late to design?

In agile development, you typically design as you develop, growing the
design, rather than having one big up front design (BUFD). It's not a
matter of design or not; there's a _lot_ of design in agile
development/XP, as well. The question is _when_ you design. In agile
development, you typically hold off on the design as long as possible,
which gives you the most information to do a good design, rather
than the typical guesswork of a big up front design.

Naturally, it's a judgement call, as well.

The conventional view of the development process is typically one of:
design, code, test, debug. However, with test-first development, you
really have test, code, then design. :) (And you might more or less
dispense with the "debug" part).

The design, in this case, is much in the refactoring. As I said,
growing the design.

One might use the same way to evolve a standard. However, this doesn't
mean that you necessarily _publish_ a revised standard incrementally
like that. As has been pointed out in this thread: users and compiler
implementers benefit from stability, so they know they can
build something that lasts.

Bjarne Stroustrup made that point once, where he said that typically
language researchers ask for new stuff all the time, to be added to
C++. Whereas when he asked the people "in the trenches", the working
programmers, the answer he got was: "Please, give us some stability."

Regards,

Terje

James Kuyper

Jun 8, 2004, 2:03:49 PM
tsle...@hotmail.com (Terje Slettebø) wrote in message news:<b0491891.04060...@posting.google.com>...

> kuy...@wizard.net (James Kuyper) wrote in message news:<8b42afac.04060...@posting.google.com>...
> > s_googl...@nedprod.com (Niall Douglas) wrote in message news:<opr83h12...@news.iol.ie>...
> > > On Fri, 4 Jun 2004 16:06:13 +0000 (UTC), llewelly
> > > <llewe...@xmission.dot.com> wrote:
> > >
> > Design without some implementation experience behind it is nearly
> > useless. However, implementation without design always ends up in a
> > horrible mess, if you take it too far (and most people do). It's a
> > judgement call how much time to assign to each activity, and in what
> > order, and my impression from your words is that you're not in the
> > habit of assigning enough time to design, or that you are in the habit
> > of putting it off until later than it should be
>
> When is it too late to design?

It's too late to design when the basic features of a system are no
longer amenable to change. However, that's just the extreme case.
Design costs a lot more, the later it is done. It is trivially easy to
start designing something later than the best possible time. That's
because the best time to start design is very early in the process.
It's a mistake to design in too much detail too early, but it's
equally a mistake to design in too little detail. Since design is
boring, and coding is fun, the second type of mistake is far more
common than the first.

Bjarne Stroustrup

Jun 8, 2004, 10:44:01 PM
davide...@yahoo.com (David Eng) wrote in message

> I am not joking. You have badly underestimated the C++ standard's name.

> Professor Doug Schmidt can raise millions of dollars for his CORBA C++
> implementation; I cannot imagine why the C++ standard committee cannot
> raise that amount of money. There are plenty of resources here,
> considering the National Science Foundation, government agencies, and
> large corporations.

> If you haven't tried, how are you so sure it is a joke? If the C++
> standard committee allows me to represent it to raise money, I would
> be more than glad to do so.

I don't think you are joking and I don't think you are trolling, but I
do think you might be underestimating the problem. The NSF, etc. funds
research in fairly well specified areas. Standardization of an "old"
language is a very long stretch.

In general, it is hard, but possible, to gain money for research -
especially for research that's seen as visionary. It is relatively
easy to get funding for development, provided you can promise a
specific product for a specific company next year; fundamental tools
such as languages and compilers are harder to fund, and note that
standards work benefits everybody - incl. the funder's competition. It
is even easier to get funding for marketing to enhance the
perceived value of an existing product. Getting significant support
for a key part of the existing infrastructure is *hard*; getting
support for changing that is even harder.

I'm sure the committee would be happy to raise money if there were a
good reason to believe that the effort had a good chance of
succeeding. However, I doubt any of us would spend much of the
precious time left for standardization after our "day job" is done
chasing a pot of gold at the end of a rainbow. The obvious questions
are: "What's your track record? How many millions have you raised? For
which purposes? And over what span of time?"


> You misunderstood my point (or I didn't present it clearly). You have
> to fund the project. Professors get promoted based on two things:
> funding and research papers. When a professor gets funding, he/she can
> hire PhD students to do the research and write papers based on the
> results of the research. The C++ standard committee can select 3-5
> projects every year, then let universities, research institutions, or
> individuals bid on the projects. This process is cheap because
> professors are paid by the university; the funding is to pay for PhD
> students' tuition. I bet a lot of professors will bid for
> the projects. If a professor cannot get funding, he/she will never get
> promoted.

If we have funding, the next barrier must be faced: professors and
students are rarely competent at judging, let alone making decisions
on, the evolution of real-world tools. Most have never seen a
real-world program. Knowing in which direction to turn to make
progress without doing damage like a bull in a china shop is a real
problem, maybe the hardest in the context of C++ standardization.
Assuming the professors and students could get paid, then what? One
problem would be that they would have to consider what kinds of work
lead to PhDs and tenure. Development work doesn't do much for that.
Much of the essential work on the standard would destroy academic
careers. This leaves the work to industry people, tenured professors,
and the brave few who are willing to bet their careers.

-- Bjarne Stroustrup; http://www.research.att.com/~bs
