
Three features you would like to have not in C++


Michael S

Nov 19, 2002, 12:36:11 PM
My top three.
1. All forms of static polymorphism: both functions with identical
names but different parameter lists, and redefinitions of non-virtual
functions in derived classes.
2. Operator overloading.
3. References.

All three are major contributors to the "what you see is not what you
get" effect.
All three make understanding code written by somebody else difficult.
All three provide no obvious advantages, with the exception of "syntax
elegance".

---
[ comp.std.c++ is moderated. To submit articles, try just posting with ]
[ your news-reader. If that fails, use mailto:std...@ncar.ucar.edu ]
[ --- Please see the FAQ before posting. --- ]
[ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]

Pete Becker

Nov 19, 2002, 1:09:10 PM
Michael S wrote:
>
> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list and redefinitions of non-virtual
> functions in the derived classes.
> 2. Operators overloading.
> 3. References.
>
> All three are major contributors to "What you see is not what you get"
> effect.

Only if you don't read all the pertinent documentation. You're right
that they make it harder to write code by guessing. But that's a good
thing.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Toni Schornboeck

Nov 19, 2002, 4:43:10 PM
Hi!

Michael S wrote:

> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list and redefinitions of non-virtual
> functions in the derived classes.

OK, it is not easy to deal with that, but if you are able to deal with
it (and I'm far away from that), then I think it is great.

>
> 2. Operators overloading.

Nope! That is one great thing in C++. What is easier to understand?
result=num+num/2;
or
result=num.add(num.divide(2));

I think operator overloading makes it easier to understand the code (if
used right - of course you _can_ obfuscate code with operator
overloading, but who _would_ do it?)

>
> 3. References.

Why that? References are nice - you don't need to check if it is a
NULL-pointer.

>
> All three are major contributors to "What you see is not what you get"
> effect.
> All three make understanding of code written by somebody else
> difficult.
> All three provide no obvious advantages with exception of "syntax
> elegance".

I don't agree.
Maybe you're looking for Java?

Someone once said: "I prefer a language that allows me things I'll
never use over a language that forbids me things I may want to use".

--

yours sincerely
Toni Schornboeck (alias Shade)

Better to be hated for what you are, than loved for what you are not!

Thomas Mang

Nov 19, 2002, 4:43:24 PM

Michael S schrieb:

> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list and redefinitions of non-virtual
> functions in the derived classes.
> 2. Operators overloading.
> 3. References.
>
> All three are major contributors to "What you see is not what you get"
> effect.

Well, actually on a standard-conforming implementation, what you see is what
you get. You just have to look closely and carefully. (Or, put a different
way, you might not get what you intuitively expected at first sight.)


>
> All three make understanding of code written by somebody else
> difficult.

Depends. They can make it both more readable, and more complicated.
It's a matter of personal taste.
For me, function overloading and references make code, generally speaking,
easier to understand, operator overloading in various cases more difficult.

>
> All three provide no obvious advantages with exception of "syntax
> elegance".

Like C++ (or any programming language) itself.
You can certainly write only '0's and '1's directly to memory/disk; then the
syntax will definitely not suffer from over-decoration.
But there might be people out there not casting '0' and '1' to the
hard-drive, because they consider this syntax a bit under-decorated. (I am
one of them).
You see, again a matter of personal taste (what's obvious for you is by no
means obvious to others).

You are a bit radical. I agree that all points can (and do in practice) cause
problems, nevertheless I wouldn't like to miss any of them.

What I'd like to see are some restrictions / refinements:

ad 1): redefinitions of non-virtual functions in derived class should either
be forbidden, or require a diagnostic message

ad 2): operator overloading should really simulate the behavior of the
built-in operators as closely as possible, and not only be a synonym for a
function call.

E.g., currently operators && and || are of little (to zero) use, because you
cannot know which operands will be evaluated first.
So either we throw them out of overloadable operators, or we define them more
closely.

ad 3): some restrictions on binding references to r-values (or diagnostic
messages).

best regards,

Thomas

David B. Held

Nov 19, 2002, 7:56:01 PM
Michael S wrote:

> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list and redefinitions of non-virtual
> functions in the derived classes.
> 2. Operators overloading.
> 3. References.
>
> All three are major contributors to "What you see is not what you get"
> effect.
> All three make understanding of code written by somebody else
> difficult.
> All three provide no obvious advantages with exception of "syntax
> elegance".

> [...]

And all three make modern C++ programming possible. I take it you would
like to not have the STL as part of C++. You must hate iterators and
think functors are a waste of time. You probably have never used Spirit
or a smart pointer. I doubt you've written any serious generic code.
If you had, there would be no doubt in your mind that the three things
you name are not only useful, but absolutely necessary to produce the
latest, most powerful C++ libraries. I suggest you try Java or Delphi.

Dave

Allan W

Nov 19, 2002, 9:03:45 PM
already...@yahoo.com (Michael S) wrote

> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list and redefinitions of non-virtual
> functions in the derived classes.
> 2. Operators overloading.
> 3. References.

> All three are major contributors to "What you see is not what you get"
> effect.
> All three make understanding of code written by somebody else
> difficult.
> All three provide no obvious advantages with exception of "syntax
> elegance".

In EXACTLY the same sense (no more and no less) than the same thing
happens with constructors and destructors.

One of C++'s features is that it allows you to move some of the details
of operation "behind the scenes." When you create an object, it
automatically runs the "constructor" and when you destroy it, it
automatically runs the "destructor." The point here is to eliminate
the InitFoo- and DestroyFoo-type functions that were tediously required
for all Foo objects in C.

Similarly, operator overloading makes it possible to make user-defined
types look more like built-in types, and makes it possible to define
code that works differently depending on its argument types.

Foo a(0);
a += 1;
a += 1L;

If we make the assumption that the designer of the Foo class was an
experienced C++ programmer who understands how operator overloading
can work, then we can further assume that operator += will always
perform some sort of addition. Furthermore, if operator +=(int) is
different than operator+=(long), it is because there is some good
reason to (perhaps efficiency). We just know that Foo's operator+=
functions will "do the right thing," and we don't have to understand
the difference to use it.

Sure, it's possible to misuse this feature:
void Foo::operator+=(int) { /* Save the FOO to disk */ }
void Foo::operator+=(long) { /* Write a report to the printer */ }
But then again it's possible to misuse C features as well:
void Foo_AddInt(Foo*,int) { /* Save the FOO to disk */ }
void Foo_AddLong(Foo*,long) { /* Write a report to the printer */ }
Neither of these is inherently worse than the other.

Sometimes it's difficult to pick up on the concept when you're new to
the language. There is a certain power to the language, which can
result in elegance or in shrapnel. Like any new language, you're
going to have to really understand the features before you can understand
when they should (and should not) be used.

On the other hand, if you've used C++ for a while now and still feel the
way you do -- my guess is you've been fighting the features for a long
time. Either try to ask yourself why they exist and how you can use
them effectively, or consider using some other language. A third
alternative is simply to avoid the features you don't personally care
for -- there are a lot of people that totally eschew using throw
specifications, and even a few that don't use exceptions at all.

Chen Gang

Nov 20, 2002, 1:41:41 AM

"Toni Schornboeck" <to...@schornboeck.net>

> > 1. All forms of static polymorphism: both functions with identical
>
> OK, it is not easy to deal with that, but if you are able to deal with
> it (and I'm far away from that), then I think it is great.
>
no comment.

> >
> > 2. Operators overloading.
>
> Nope! That is one great thing in C++. What is easier to understand?

Totally agree.

> >
> > 3. References.
>
> Why that? References are nice - you don't need to check if it is a
> NULL-pointer.
>

Well, that is just one face of the coin. Yes, you don't have to check it,
but you have to create an instance of something beforehand, even if you
don't need it at that moment.

Regards
chengang
Alientools Software
www.alientools.com

John Nagle

Nov 20, 2002, 8:16:50 AM
Michael S wrote:

> redefinitions of non-virtual functions in the derived classes.


That's the only item in the list that's a real problem.
Redefinition of a non-virtual function is usually an error,
and it's an error that can only be detected by looking at
code in two different files. Such cross-file inconsistencies
should be caught by the compiler whenever possible.

(We will now, of course, hear from people who insist
that this misfeature is needed in some obscure situation.)

> references

The main problem with references is that they came in
late. C++ code should use references more and pointers
less, but there's too much bad history. Even Stroustrup
admits that "this" should have been a reference, not
a pointer.

I'd like to have "self" as equivalent to "(*this)",
and eventually deprecate "this". But it's not going to
happen. We'll all be using C# before C++ gets fixed.

John Nagle
Animats

Michael S

Nov 20, 2002, 2:02:44 PM
all...@my-dejanews.com (Allan W) wrote in message news:<7f2735a5.0211...@posting.google.com>...

> already...@yahoo.com (Michael S) wrote
>
> > My top three.
> > 1. All forms of static polymorphism: both functions with identical
> > names/ different parameters list and redefinitions of non-virtual
> > functions in the derived classes.
> > 2. Operators overloading.
> > 3. References.
>
> > All three are major contributors to "What you see is not what you get"
> > effect.
> > All three make understanding of code written by somebody else
> > difficult.
> > All three provide no obvious advantages with exception of "syntax
> > elegance".
>
> In EXACTLY the same sense (no more and no less) than the same thing
> happens with constructors and destructors.
>
> One of C++'s features is that it allows you to move some of the details
> of operation "behind the scenes." When you create an object, it
> automatically runs the "constructor" and when you destroy it, it
> automatically runs the "destructor." The point here is to eliminate
> the InitFoo- and DestroyFoo-type functions that were tediously required
> for all Foo objects in C.
>
I was prompted to include constructors as number four on my list.
But:
Constructors *are* useful, no doubt. But their use had better be severely
restricted. For example, the "allocate resource in constructor, free in
destructor" idiom appears to make life easier. In practice it brings a
lot of oddities. One is the use of exceptions when what you really want is
to return an error code. This idiom also promotes incomplete testing of
error conditions, or excessive use of flags to indicate whether a resource
was allocated or not (needed in the corresponding destructor). What I
would like is a restriction of constructors to the kind of actions that
"can't fail". I realize that it is virtually impossible to introduce
such a restriction at the level of the programming language.
Finally I came to the conclusion that the usefulness of constructors
outweighs the induced problems.
BTW, I don't feel that bad about destructors.

> Similarly, operator overloading makes it possible to make user-defined
> types look more like built-in types, and makes it possible to define
> code that works differently depending on it's argument types.
>
> Foo a(0);
> a += 1;
> a += 1L;
>
> If we make the assumption that the designer of the Foo class was an
> experienced C++ programmer who understands how operator overloading
> can work, then we can further assume that operator += will always
> perform some sort of addition. Furthermore, if operator +=(int) is
> different than operator+=(long), it is because there is some good
> reason to (perhaps efficiency). We just know that Foo's operator+=
> functions will "do the right thing," and we don't have to understand
> the difference to use it.
>
> Sure, it's possible to misuse this feature:
> Foo::operator+=(int) { /* Save the FOO to disk */ }
> Foo::operator+=(long) { /* Write a report to the printer */ }
> But then again it's possible to misuse C features as well:
> Foo_AddInt(Foo*,int) { /* Save the FOO to disk */ }
> Foo_AddLong(Foo*,long) { /* Write a report to the printer */ }
> Neither of these is inherently worse than the other.

Nothing illustrates the difficulty of the proper use of operator
overloading better than the first use of the concept in
Stroustrup's book.
The use of "<<" and ">>" in the iostream library has nothing to do
with shifts! If Bjarne himself can't do it right, I doubt the average C++
programmer can.

>
> Sometimes it's difficult to pick up on the concept when you're new to
> the language. There is a certain power to the language, which can
> result in elegance or in shrapnel. Like any new language, you're
> going to have to really understand the features before you can understand
> when they should (and should not) be used.
>
> On the other hand, if you've used C++ for a while now and still feel the
> way you do -- my guess is you've been fighting the features for a long
> time. Either try to ask yourself why they exist and how you can use
> them effectively, or consider using some other language. A third
> alternative is simply to avoid the features you don't personally care
> for -- there are a lot of people that totally eschew using throw
> specifications, and even a few that don't use exceptions at all.
>

I have used C++ on a daily basis for more than 10 years. I program almost
exclusively in C++ for "computer" targets. I also use C, mostly for
embedded targets. Of course, I try to avoid my "top three" in the classes
I design. But I don't live in a vacuum. Sometimes you are forced to use
operator overloading and references in your code.
The most important example is the use of STL. STL is an extremely useful
piece of code. Unfortunately you can't use it effectively without
references, because a "const pointer" alternative for most functions is
not provided and the "pass by value" alternative is often too expensive.
Most of the time I was able to use STL without operator overloading, but
sometimes it is unnecessarily difficult.
I avoid static polymorphism in my code, but sometimes I see it in
third-party libraries. It is misused in about 100% of the cases.

I have asked myself many times why these features exist. The only answer
was: for the sake of syntactic elegance. Unlike most other C++ features,
these three provide no real advantages.

> using some other language
What other language would you recommend? I don't know of anything as
powerful and ubiquitous as C++, with the same ability to produce
low-level code when needed.

> there are a lot of people that totally eschew using throw
> specifications, and even a few that don't use exceptions at all.
>

It has nothing to do with the topic, but IMO using exceptions without
throw specifications is bad practice. I don't see a good reason why
it is allowed.

Michael S

Nov 20, 2002, 2:04:25 PM
dh...@codelogicconsulting.com ("David B. Held") wrote in message news:<utlj01n...@corp.supernews.com>...

You are right, generic code is difficult without static polymorphism
and/or operator overloading. References can be easily avoided in
generic code *unless* you rely on operator overloading.
But why is that? Because template syntax was designed with static
polymorphism and operator overloading in mind. That is the reason
something like a function name (not the same as a pointer to function)
is not allowed in a function template parameter list. If static
polymorphism were not in the language, template syntax would be
different, enabling the same functionality.

Even with the existing syntax you can always achieve your goals without my
"top two".
A simple "stub class" trick makes a good substitute for static
polymorphism.
Operator overloading is good for producing containers which work the
same with classes and basic types. Good thing? Yes. Necessary? No.
It's very easy for the container user to wrap a basic type in a class,
achieving the same result.
In both cases the generic code provider needs a little cooperation from
the library user. I don't think it is too big a deal.

Robert Klemme

Nov 20, 2002, 2:10:28 PM

Michael S schrieb:

>
> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list

no. that's common in a lot of languages and is reasonable when
you think of default parameters.

> and redefinitions of non-virtual
> functions in the derived classes.

agreed. or: virtual should be the default and non-virtual needs
a keyword. or: make compilers warn about overriding a non-virtual
method.

> 2. Operators overloading.

do you mean user defined operator overloading? then i'll agree.
otherwise not, because at least for numeric values there should
be some reasonable operators - and they are always overloaded
since they work with int, float etc.

> 3. References.

no. when you see a reference, you know exactly what you get.

> All three are major contributors to "What you see is not what you get"
> effect.

not quite.

> All three make understanding of code written by somebody else
> difficult.

partly.

> All three provide no obvious advantages with exception of "syntax
> elegance".

this will spark a lot of discussion...

regards

robert

Andy Sawyer

Nov 20, 2002, 2:10:49 PM
In article <3DDB275F...@animats.com>,
on Wed, 20 Nov 2002 13:16:50 +0000 (UTC),
na...@animats.com (John Nagle) wrote:

> Michael S wrote:
>
> > redefinitions of non-virtual functions in the derived classes.
>
>
> That's the only item in the list that's a real problem.
> Redefinition of a non-virtual function is usually an error,

Redefinition of a non-virtual _public_ function is _often_ an
error. If a function is private, then it's quite reasonable for a
derived class to define a function with the same name - since the base
class function may not be documented. Or a function might be added
later to a base class which has the same name as a function in a
derived class. (I know that I, for one, don't get to see every class
which derives from classes I write - that doesn't stop me improving
the implementation of those classes.)

> and it's an error that can only be detected by looking at
> code in two different files. Such cross-file inconsistencies
> should be caught by the compiler whenever possible.

Compilers are free to issue a warning in this situation, and should
probably be encouraged to do so - it is, often, incorrect.

> (We will now, of course, hear from people who insist
> that this misfeature is needed in some obscure situation.)

Not surprising, since we've already heard from those that insist that
this feature is "usually an error" and is only "needed in some obscure
situation".

> The main problem with references is that they came in
> late. C++ code should use references more and pointers
> less, but there's too much bad history. Even Strostrup
> admits that "this" should have been a reference, not
> a pointer.

Once upon a time (before placement new), assignment to "this" was used
for user-defined allocation. That wouldn't have made sense if "this"
had been a reference.

> I'd like to have "self" as equivalent to "(*this)",
> and eventually deprecate "this". But it's not going to
> happen.

Because there are thousands (maybe tens, or hundreds of thousands) of
lines of code which rely on this having the semantics it's had ever
since C++ became widely available.

> We'll all be using C# before C++ gets fixed.

C++ is fine - but don't let that stop you.

Regards,
Andy S.
--
"Light thinks it travels faster than anything but it is wrong. No matter
how fast light travels it finds the darkness has always got there first,
and is waiting for it." -- Terry Pratchett, Reaper Man

Ken Alverson

Nov 20, 2002, 2:31:02 PM
""Chen Gang"" <lion...@pub.wx.jsinfo.net> wrote in message
news:arf8ku$i12oi$1...@ID-138436.news.dfncis.de...

> > >
> > > 3. References.
> >
> > Why that? References are nice - you don't need to check if it is a
> > NULL-pointer.
> >
>
> Well it is just one face of a coin, yes, you don't have to check it,
> but you have to make a instance of something before, even you don't
> need it at that moment.

There are instances when a pointer is appropriate, and instances when a
reference is appropriate. Nobody ever said references should replace
pointers. Just that pointers should not replace references. The two
can coexist peacefully.

Ken

Ken Alverson

Nov 20, 2002, 2:31:14 PM
"John Nagle" <na...@animats.com> wrote in message
news:3DDB275F...@animats.com...

> Michael S wrote:
>
> > redefinitions of non-virtual functions in the derived classes.
>
>
> That's the only item in the list that's a real problem.
> Redefinition of a non-virtual function is usually an error,
> and it's an error that can only be detected by looking at
> code in two different files. Such cross-file inconsistencies
> should be caught by the compiler whenever possible.
>
> (We will now, of course, hear from people who insist
> that this misfeature is needed in some obscure situation.)

C# went the route of using a keyword (new) to express that you are doing
such an overload intentionally. While it can't be made an error without
breaking somebody's code, a warning could be issued and a keyword (or
other syntax, similar to abstract methods) added that would suppress the
warning.

Ken

John G Harris

Nov 20, 2002, 4:33:36 PM
In article <f881b862.02112...@posting.google.com>, Michael S
<already...@yahoo.com> writes
<snip>

>Nothing illustrates difficulty of the proper use of operators
>overloading better than the first use of the concept in the
>Stroustrup's book.
>The use of "<<" and ">>" in the IO stream library has nothing to do
>with shifts ! If Bjarne himself can't do it right, I doubt average C++
>programmer can.
<snip>

The first time I saw the use of << as a shift operator I thought 'Why
are they using an i/o operator for bit twiddling ?'.

Do you consider these contrary views to be a strength or a weakness of
C++ ?

John
--
John Harris
mailto:jo...@jgharris.demon.co.uk

Andy Sawyer

Nov 20, 2002, 6:56:01 PM
In article <arge9o$i0v$1...@eeyore.INS.cwru.edu>,
on Wed, 20 Nov 2002 19:31:14 +0000 (UTC),

K...@Alverson.net ("Ken Alverson") wrote:

> "John Nagle" <na...@animats.com> wrote in message
> news:3DDB275F...@animats.com...
> > Michael S wrote:
> >
> > > redefinitions of non-virtual functions in the derived classes.
> >
> >
> > That's the only item in the list that's a real problem.
> > Redefinition of a non-virtual function is usually an error,
> > and it's an error that can only be detected by looking at
> > code in two different files. Such cross-file inconsistencies
> > should be caught by the compiler whenever possible.
> >
> > (We will now, of course, hear from people who insist
> > that this misfeature is needed in some obscure situation.)
>
> C# went the route of using a keyword (new) to express that you are doing
> such an overload intentionally. While it can't be made an error without
> breaking somebody's code, a warning could be issued and a keyword (or
> other syntax, similar to abstract methods) added that would supress the
> warning.

I've got an idea - why don't we introduce the keyword "overload"...

Regards,
Andy S.

P.S. For those that don't know, "overload" used to be a C++ keyword.


--
"Light thinks it travels faster than anything but it is wrong. No matter
how fast light travels it finds the darkness has always got there first,
and is waiting for it." -- Terry Pratchett, Reaper Man

---

Claus Rasmussen

Nov 20, 2002, 9:47:40 PM
John Nagle wrote:

>> redefinitions of non-virtual functions in the derived classes.
>
> That's the only item in the list that's a real problem.
> Redefinition of a non-virtual function is usually an error,
> and it's an error that can only be detected by looking at
> code in two different files. Such cross-file inconsistencies
> should be caught by the compiler whenever possible.
>
> (We will now, of course, hear from people who insist
> that this misfeature is needed in some obscure situation.)

Heh... You beat me to it:

template<typename T>
struct Base {
    void f() { static_cast<T*>(this)->g(); }
    void g() { some_default_implementation; }
};

struct User : Base<User> {
    void g() { user_defined_override; }
};

struct AnotherUser : Base<AnotherUser> {
    // No override of g() - uses Base<>::g
};
};

The idea in this example is to avoid defining virtual functions in the
base class and yet allow derived classes to override some behaviour in
the base class. It is useful when you define library-like containers
and don't want to burden _all_ of your users with virtual functions
when only a few need them.

-Claus

Chen Gang

Nov 21, 2002, 4:41:58 AM

""Ken Alverson"" <K...@Alverson.net> wrote in message
news:argdsh$ge0$1...@eeyore.INS.cwru.edu...

> > > >
> > > > 3. References.
> > >
> > > Why that? References are nice - you don't need to check if it is a
> > > NULL-pointer.
> > >
> >
> > Well it is just one face of a coin, yes, you don't have to check it,
> > but you have to make a instance of something before, even you don't
> > need it at that moment.
>
> There are instances when a pointer is appropriate, and instances when a

I did not mean the instances of the pointers (or references) themselves.
I meant the instances which the pointers (or references) point (or refer)
to.

Using a pointer, you only need to create the object (pointed to by the
pointer) when you need it.

> reference is appropriate. Nobody ever said references should replace
> pointers. Just that pointers should not replace references. The two
> can coexist peacefully.
>

Yes, I agree too.

Regards
Chen Gang

James Kanze

Nov 21, 2002, 12:03:45 PM
already...@yahoo.com (Michael S) wrote in message
news:<f881b862.02111...@posting.google.com>...

> My top three.
> 1. All forms of static polymorphism: both functions with identical
> names/ different parameters list and redefinitions of non-virtual
> functions in the derived classes.
> 2. Operators overloading.

I'm just curious, but can you show me a language which has more than one
type, and doesn't have these two characteristics for the base types?
Where floating point and integral addition have different operators, for
example?

The problem isn't the polymorphism/overloading per se. I'd say that it
is almost necessary in the most obvious cases (like the + operator).
I'd also say that not extending it to user defined types is a show of
arrogance which simply isn't acceptable today: why should I be able to
write + for addition of double's, but not for the addition of Decimal?

The problem is that there is no consensus as to where to draw the line.
Pascal, for example, has a special operator, div, for integral division,
because the semantics are considered different than those of floating
point division (and I've seen more than one beginner surprised in other
languages by the fact that 1/3 is 0, so they may have a point). If your
argument is that many, or even most, C++ programmers overdo it, I'd
agree. But the answer to that is education -- take that possibility for
obfuscation away from them, and they'll find another. ("A real
programmer can write Fortran in any language.")

> 3. References.

> All three are major contributors to "What you see is not what you get"
> effect.

I don't really agree concerning references. For the others, how they are
used is often a major contributor to the "What does this mean to the
compiler?" guessing game. Part of this IS due to the language: there
are too many implicit type conversions, and the rules of overload
resolution are too complicated. But a large part is just due to poor
programming. I never bother to analyse how the compiler is going to
resolve my overloading. Because if it makes a difference, the functions
have different names, so that I (and not the compiler) do the choosing.

> All three make understanding of code written by somebody else
> difficult.

If the author wants you to be able to understand the code, he can write
code that can be understood in C++. If the author doesn't want you to
understand the code, he can write code which is incomprehensible in any
language. So we're really only talking about people who don't care,
right?

My experience is that people who don't care write bad code. Period, and
independently of the language. There are a number of things in C++ that
I think could be improved, but none of them prevent writing readable
code.

> All three provide no obvious advantages with exception of "syntax
> elegance".

A for statement, instead of just using if and goto, is also just "syntax
elegance".

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

Michael S

Nov 21, 2002, 12:04:06 PM
c...@cc-consult.dk (Claus Rasmussen) wrote in message news:<arhg7s$bbu$1...@sunsite.dk>...

> John Nagle wrote:
>
> >> redefinitions of non-virtual functions in the derived classes.
> >
> > That's the only item in the list that's a real problem.
> > Redefinition of a non-virtual function is usually an error,
> > and it's an error that can only be detected by looking at
> > code in two different files. Such cross-file inconsistencies
> > should be caught by the compiler whenever possible.
> >
> > (We will now, of course, hear from people who insist
> > that this misfeature is needed in some obscure situation.)
>
> Heh... You beat me to it:
>
> template<typename T>
> struct Base {
> void f() { static_cast<T*>(this)->g(); }
> void g() { some_default_implementation; }
> };
>
> struct User : Base<User> {
> void g() { user_defined_override; }
> };
>
> struct AnotherUser : Base<AnotherUser> {
> // No override of g() - uses Base<>::g
> };
>
> The idea in this example is to avoid defining virtual functions in the
> base class and yet allow derived classes to override some behaviour in
> the base class. It comes of use, when you define library-like containers
> and won't burden _all_ of your users with virtual functions when only
> a few need them.
>
> -Claus
>
Just an example of improper generic library design. The proper way to
do it:

template<typename T> struct Base {
void f() { static_cast<T*>(this)->g(); }
};

template<typename T> struct CommonCaseBase: public Base<T> {
void g() { some_default_implementation; }
};

This reminds me of the abstract vs. non-abstract base class patterns in
the classic OOP (non-generic) world.
What is the preferable way to build a hierarchy?
A.
class VirtualBase {
virtual void foo() { some_default_implementation; }
};

B.
class AbstractVirtualBase {
virtual void foo() = 0;
};
class CommonCaseVirtualBase: public AbstractVirtualBase {
virtual void foo() { some_default_implementation; }
};

I think most OOP pundits will prefer the latter design.
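For reference, the static-polymorphism pattern under discussion does compile and dispatch at compile time. A minimal self-contained sketch (hypothetical string return values stand in for `some_default_implementation`):

```cpp
#include <string>

// Static polymorphism via the curiously recurring template pattern:
// Base<T>::f() calls the derived class's g() at compile time, with no
// virtual dispatch involved.
template <typename T>
struct Base {
    std::string f() { return static_cast<T*>(this)->g(); }
    std::string g() { return "default"; }
};

struct User : Base<User> {
    std::string g() { return "override"; }  // hides Base<User>::g for calls via f()
};

struct AnotherUser : Base<AnotherUser> {
    // no g() here: f() falls back to Base<AnotherUser>::g
};
```

`User().f()` yields "override" while `AnotherUser().f()` yields "default" -- exactly the redefinition-of-a-non-virtual-function behaviour being debated.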

Fergus Henderson

Nov 21, 2002, 12:35:59 PM
already...@yahoo.com (Michael S) writes:

>My top three.
>1. All forms of static polymorphism: both functions with identical
>names/ different parameters list and redefinitions of non-virtual
>functions in the derived classes.
>2. Operators overloading.
>3. References.

My top three:

1. Undefined behaviour.
2. Unspecified behaviour.
3. Implementation-defined behaviour.

;-)

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.

Michael S

Nov 21, 2002, 1:13:02 PM
bob....@gmx.net (Robert Klemme) wrote in message news:<3DDBC261...@gmx.net>...

> Michael S schrieb:
> >
> > My top three.
> > 1. All forms of static polymorphism: both functions with identical
> > names/ different parameters list
>
> no. that's common in a lot of languages and is reasonable when
> you think of default parameters.
Quite the contrary. Functions with identical names and different parameter
lists would be reasonable and acceptable if default parameters were
disallowed. These two features together create a mess for the reader.
Because default parameters *are* useful and identical names are not,
I would like identical names gone.

>
> > and redefinitions of non-virtual
> > functions in the derived classes.
>
> agreed. or: virtual should be the default vs. non default needs
> a keyword. or: make compilers warn about overwriting a non
> virtual method.
>
> > 2. Operators overloading.
>
> do you mean user defined operator overloading? then i'll agree.
> otherwise not, because at least for numeric values there should
> be some reasonable operators - and they are always overloaded
> since they work with int, float etc.
>

The only operator overloading I know is user-defined operator overloading.
Probably you want to say that operator overloading for complex numbers
looks just about right. I don't think a very special and rarely used
case like this would justify the inclusion of a big complication like
operator overloading.

> > 3. References.
>
> no. when you see a reference, you know exactly what you get.
>

I mean the use of references as parameters in function calls, i.e. 98% of
their total use.
One nice thing in C relative to Pascal and some older languages is
that when you see a function call you can immediately tell whether
parameters are passed by value or by reference (pointer). In C++, due
to references, you have to consult the function prototype in order to know
what's happening. The simplicity is gone.
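The call-site difference being described fits in a two-line sketch (hypothetical function names; only the pointer version announces itself with `&` at the call site):

```cpp
// C style: the caller must write &x, so possible modification is
// visible at the call site.
void inc_by_pointer(int* n) { ++*n; }

// C++ reference: the call site reads exactly like pass-by-value; only
// the prototype reveals that the argument may be modified.
void inc_by_reference(int& n) { ++n; }
```

`inc_by_pointer(&x)` flags the aliasing at a glance; `inc_by_reference(x)` does not.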


> > All three are major contributors to "What you see is not what you get"
> > effect.
>
> not quite.
>
> > All three make understanding of code written by somebody else
> > difficult.
>
> partly.
>
> > All three provide no obvious advantages with exception of "syntax
> > elegance".
>
> this will spark a lot of discussion...
>

It's my primary goal.

Robert Klemme

Nov 21, 2002, 1:28:05 PM

Michael S schrieb:

> > > 1. All forms of static polymorphism: both functions with identical
> > > names/ different parameters list
> >
> > no. that's common in a lot of languages and is reasonable when
> > you think of default parameters.
> Quite the contrary. Functions with identical names and different parameter
> lists would be reasonable and acceptable if default parameters were
> disallowed. These two features together create a mess for the reader.
> Because default parameters *are* useful and identical names are not,
> I would like identical names gone.

seeing it this way i can agree.

> > do you mean user defined operator overloading? then i'll agree.
> > otherwise not, because at least for numeric values there should
> > be some reasonable operators - and they are always overloaded
> > since they work with int, float etc.
> >
> Only operator overloading I know is user defined operator overloading.
> Probably, you want to say operator overloading for complex numbers
> looks just about right. I don't think the very special and rarely used
> case like this would justify inclusion of big complication like
> operator overloading.

no. operator overloading is so common that most people don't
notice it. here you have an example of an overloaded + operator,
which you can find similarly in a lot of procedural and hybrid
languages.

int n = 4 + 5; // int + int
float y = 12.34 + 14.67; // float + float

you surely don't want to throw this out, do you?

> I mean use of references as parameters in function call, i.e. 98% of
> their total use.
> One nice thing in C relatively to Pascal and some older languages is
> that when you see the function call you can immediately understand if
> parameters are passed by value or by reference (pointer). In C++, due
> to references, you have to consult function prototype in order to know
> what's happening. The simplicity is gone.

i agree, that it's a tad more difficult. but this should not be
too difficult with a modern ide.

Niklas Matthies

Nov 21, 2002, 2:59:36 PM
On 2002-11-20 19:02, Michael S <already...@yahoo.com> wrote:
[...]

> The use of "<<" and ">>" in the IO stream library has nothing to do
> with shifts !

Of course it does! You shift items onto a stream, or off a stream.

-- Niklas Matthies

Edward Diener

Nov 21, 2002, 5:58:09 PM
"James Kanze" <ka...@gabi-soft.de> wrote in message
news:d6651fb6.02112...@posting.google.com...

> already...@yahoo.com (Michael S) wrote in message
> news:<f881b862.02111...@posting.google.com>...
> > My top three.
> > 1. All forms of static polymorphism: both functions with identical
> > names/ different parameters list and redefinitions of non-virtual
> > functions in the derived classes.
> > 2. Operators overloading.
> > 3. References.
>
> > All three are major contributors to "What you see is not what you get"
> > effect.
>
> > All three make understanding of code written by somebody else
> > difficult.
>
> If the author wants you to be able to understand the code, he can write
> code that can be understood in C++. If the author doesn't want you to
> understand the code, he can write code which is incomprehensible in any
> language. So we're really only talking about people who don't care,
> right?

Why, the author could even do the unthinkable and actually document the class
separately from the source. Then one wouldn't have to study the source to
understand how things actually work. But I probably shouldn't be
mentioning something so radical to programmers <g>.

I think that a good part of the reason why the original poster brought up
the 3 items which could lead to programmer abuse is that the idea of
actually documenting class ( or function ) usage is anathema to so many
programmers that the thought that such documentation could clarify the
issues mentioned was not even to be considered. The abuses to which any of
the 3 items lead are often reduced to insignificance by good documentation
and by programmers willing to read and understand it.

David B. Held

Nov 21, 2002, 6:00:36 PM
Michael S wrote:

> [...]


> You are right, generic code is difficult without static polymorphism
> and/or operators overloading.

And why would you want to do things in a difficult way when a very
convenient and powerful way exists? If you enjoy difficulty, go back to
assembler.

> References can be easily avoided in generic code *unless* you rely on
> operators overloading.

And why should references be avoided? Guns kill people. Does that mean
we should all hunt with rocks and knives? Powerful tools have potential
for powerful abuse. I simply choose to use them carefully, and educate
myself on how they can be misused. And when I use them correctly and
safely, I get a lot more done in a lot less time with a lot less effort.

> But why is it? Because template syntax was produced with static


> polymorphism and operators overloading in mind.

And this is a bad thing?

> It is the reason something like function names (not the same as
> pointer to function) is not allowed in the function template
> parameters list.

I'm not sure what you mean. Do you mean this is not legal?

template <void F()>
void foo(F f)
{ f(); }

If so, is there a place where you can't accomplish this in another way?

> If static polymorphism was not in the language, template syntax would
> be different, enabling the same functionality.

What would the syntax be?

> Even with existing syntax you always can achieve your goals without my
> "top two". Simple "stub class" trick makes a good substitute for
> static polymorphism.

I'm not familiar with this. However, I'm wondering if you think this is
bad:

std::cout << "Hello, I have " << 5 << " objects weighing "
<< 2.5 << " lbs." << '\n';

and would prefer this form:

std::insert_string(std::cout, "Hello, I have");
std::insert_int(std::cout, 5);
std::insert_string(std::cout, " objects weighing ");
std::insert_float(std::cout, 2.5);
std::insert_string(std::cout, " lbs.");
std::insert_char(std::cout, '\n');

Or maybe you prefer an ellipsis function like printf? Who needs type
safety when you can avoid operator overloading and static polymorphism
in one fell swoop, eh?

> Operators overloading is good in producing containers which work the
> same with classes and basic types. Good thing ? Yes. Necessary ? No.

Useful? Extremely. operator+(int, int): Necessary? No. You can
accomplish the same thing with operator^ and operator&. "Necessary" is
a very poor metric for determining whether a feature *ought* to be in a
language, since almost no feature in any modern language is strictly
"necessary".

> It's very easy for container user to wrap basic type in class
> achieving the same result.

Actually, you can just add free functions like so:

int add(int x, int y) { return x + y; }
template <typename T>
T& deref(T* p) { return *p; }

template <typename T, typename Iter>
T accum(Iter begin, Iter end, T x)
{ while (begin != end) x = add(x, deref(begin++)); return x; }

But I find that much less readable than:

template <typename T, typename Iter>
T accum(Iter begin, Iter end, T x)
{ while (begin != end) x += *begin++; return x; }
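A quick usage sketch of the operator-based version above (assuming only that the value type supports `+=` and the iterator supports `*` and `++`):

```cpp
#include <vector>

// Same accumulate-style template as above: any iterator with * and ++,
// any value type with +=, will do -- courtesy of operator overloading.
template <typename T, typename Iter>
T accum(Iter begin, Iter end, T x) {
    while (begin != end) x += *begin++;
    return x;
}
```

It works unchanged for `int`s in a vector, `double`s in a plain array, or any user type that overloads `+=`.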

> In both cases generic code provider needs a little cooperation from
> the library user. I don't think it is too big deal.

A) I wouldn't use a library that requires "cooperation" that I didn't
think ought to be necessary, especially if the reason for the
cooperation is because the library author is intentionally avoiding
useful language features. I would recommend that such an author
considering finding a language more suited to his liking (perhaps
Objective C?).

B) You could write code in a way that does not depend on "user
cooperation" in a good deal of instances. However, whether this is a
"good" way of doing things is highly questionable. Despite all the talk
about poorly overloaded operators and function names, I have never
personally failed to properly use a library because of confusion about
these issues. I'm sure others have, but it seems to me that the problem
is not nearly so prevalent as naysayers would have us believe.

> I asked myself many times why these features exist. The only answer was
> - for the sake of syntactic elegance. Unlike most other C++ features,
> these three provide no real advantages.

Performance of the executable is only one factor in the importance of a
language design. Readability and maintainability are just as important
for large, long-lived projects. If syntactic elegance makes code more
readable, then it offers a *real advantage*. I have the pleasure of
converting some C code to some C++ code, and changing names of the form:

void FooSomeType(...);
void FooSomeOtherType(...);
void FooThisType(...);
void FooThatType(...);

to the form I prefer:

void Foo(SomeType ...);
void Foo(SomeOtherType ...);
void Foo(ThisType ...);
void Foo(ThatType ...);

Having to encode the type in the name is quite cumbersome, IMO, and
isn't any different than identifier warts. My guess is that you just
love Hungarian notation.

> It has nothing to do with a topic, but IMO using exceptions without
> throw specifications is a bad practice. I don't see a good reason why
> it is allowed.

http://www.boost.org/more/lib_guide.htm#Exception-specification

Francis Glassborow

Nov 21, 2002, 6:13:50 PM
>I use C++ on daily basis for more than 10 years.

If you avoid all forms of static polymorphism and operator overloading
then you are using a very constrained subset of C++. Static
polymorphism is extremely useful, for example in making clear when two
or more functions are intended to achieve the same result albeit with
different data.

Operator overloading used properly is an essential tool in reducing
errors. Being able to write source code that is visually similar to the
way that a domain expert would write the same thing on paper reduces the
errors that such specialist programmers make and makes their code more
expressive.

Your mention of embedded programming in C makes me suspect that most of
your coding is actually relatively small scale where you would be less
likely to experience the problems that your top three things are
designed to mitigate.


--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

Francis Glassborow

Nov 21, 2002, 6:24:41 PM
>Only operator overloading I know is user defined operator overloading.
>Probably, you want to say operator overloading for complex numbers
>looks just about right. I don't think the very special and rarely used
>case like this would justify inclusion of big complication like
>operator overloading.


Rarely used by who? And that is just one instance of a numerical type
where operator overloading is natural and aids understanding.

There are many numerical types and semi-numerical types (such as dates
and time) where operators make sense. If you do not like them, do not
use them. Anyway this discussion is pretty sterile because none of the
things you listed are going to go away. The only thing worth considering
is tackling accidental overriding of non-virtual member functions.

--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation


David Abrahams

Nov 21, 2002, 8:50:42 PM
dh...@codelogicconsulting.com ("David B. Held") writes:

> Michael S wrote:
>
>> My top three.
>> 1. All forms of static polymorphism: both functions with identical
>> names/ different parameters list and redefinitions of non-virtual
>> functions in the derived classes.
>> 2. Operators overloading.
>> 3. References.
>>
>> All three are major contributors to "What you see is not what you get"
>> effect.
>> All three make understanding of code written by somebody else
>> difficult.
>> All three provide no obvious advantages with exception of "syntax
>> elegance".
>> [...]
>
> And all three make modern C++ programming possible. I take it you
> would like to not have the STL as part of C++. You must hate
> iterators and think functors are a waste of time. You probably have
> never used Spirit or a smart pointer. I doubt you've written any
> serious generic code. If you had, there would be no doubt in your mind
> that the three things you name are not only useful, but absolutely
> necessary to produce the latest, most powerful C++ libraries. I
> suggest you try Java or Delphi.

I'm afraid even Java has operator overloading now <wink>.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

Ryjek

Nov 22, 2002, 4:45:27 AM
James Kanze wrote:
> already...@yahoo.com (Michael S) wrote in message
> news:<f881b862.02111...@posting.google.com>...
>
>>My top three.
>>1. All forms of static polymorphism: both functions with identical
>>names/ different parameters list and redefinitions of non-virtual
>>functions in the derived classes.
>>2. Operators overloading.
>
>
> I'm just curious, but can you show me a language which has more than one
> type, and doesn't have these two characteristics for the base types?
> Where floating point and integral addition have different operators, for
> example?

Ocaml

James Kanze

Nov 22, 2002, 1:27:00 PM
ry...@cox.net (Ryjek) wrote in message
news:<3DDDAA3E...@cox.net>...

> James Kanze wrote:
> > already...@yahoo.com (Michael S) wrote in message
> > news:<f881b862.02111...@posting.google.com>...

> >>My top three.
> >>1. All forms of static polymorphism: both functions with identical
> >>names/ different parameters list and redefinitions of non-virtual
> >>functions in the derived classes.
> >>2. Operators overloading.

> > I'm just curious, but can you show me a language which has more than
> > one type, and doesn't have these two characteristics for the base
> > types? Where floating point and integral addition have different
> > operators, for example?

> Ocaml

Does it have more than one type? B also had different operators, but
that was because it only had one data type, a machine word; it was up to
the programmer to remember what type the word contained, and use the
right operator.

That is, of course, what not having overloaded functions and operators
comes down to. You find that your long's are overflowing, so you pop in
a BigInteger class, and have to rewrite all of the code.
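The drop-in replacement described here only works because the replacement type can overload the same operators. A toy sketch (wrapping `long long` purely to keep it short; a real big-integer class would manage digit arrays):

```cpp
// Toy stand-in for a big-integer class. Because it overloads + and ==,
// code originally written against a built-in integer type keeps
// compiling, unchanged, after the type swap.
struct BigInt {
    long long v;
    BigInt(long long x = 0) : v(x) {}
};

BigInt operator+(BigInt a, BigInt b) { return BigInt(a.v + b.v); }
bool   operator==(BigInt a, BigInt b) { return a.v == b.v; }

// Generic code that neither knows nor cares which integer type it gets.
template <typename T>
T sum3(T a, T b, T c) { return a + b + c; }
```

Without overloading, every `+` in `sum3` would have to become a named `add` call before `BigInt` could be substituted.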

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung


Ryjek

Nov 22, 2002, 2:05:53 PM
James Kanze wrote:
> ry...@cox.net (Ryjek) wrote in message
>>James Kanze wrote:
>>>I'm just curious, but can you show me a language which has more than
>>>one type, and doesn't have these two characteristics for the base
>>>types? Where floating point and integral addition have different
>>>operators, for example?
>
>>Ocaml
>
> Does it have more than one type? B also had different operators, but
> that was because it only had one data type, a machine word; it was up to
> the programmer to remember what type the word contained, and use the
> right operator.

You can definitely have more than one type in Ocaml. It is statically
typed and is one of the most popular functional languages.

James Kanze

Nov 22, 2002, 2:29:00 PM
already...@yahoo.com (Michael S) wrote in message
news:<f881b862.02112...@posting.google.com>...

> bob....@gmx.net (Robert Klemme) wrote in message
> news:<3DDBC261...@gmx.net>...
> > Michael S schrieb:

> > > My top three.
> > > 1. All forms of static polymorphism: both functions with identical
> > > names/ different parameters list

> > no. that's common in a lot of languages and is reasonable when
> > you think of default parameters.

> Quite the contrary. Functions with identical names and different parameter
> lists would be reasonable and acceptable if default parameters were
> disallowed. These two features together create a mess for the reader.

Why? Default parameters are just a limited form of static overloading.

> Because default parameters *are* useful and identical names are not -
> I would like identical names gone.

How are default parameters different than identical names?

> > > and redefinitions of non-virtual functions in the derived classes.

> > agreed. or: virtual should be the default vs. non default needs a
> > keyword. or: make compilers warn about overwriting a non virtual
> > method.

Only the last. Most functions shouldn't be virtual, so the default is
OK.

> > > 2. Operators overloading.

> > do you mean user defined operator overloading? then i'll agree.
> > otherwise not, because at least for numeric values there should be
> > some reasonable operators - and they are always overloaded since
> > they work with int, float etc.

> Only operator overloading I know is user defined operator overloading.

Nonsense. You can use / with ints, and you can use it with doubles.
And it has a different meaning in the two cases! That's overloading,
and it is arguably bad overloading (because of the different meaning).

Or are you arguing that the inventors of the language knew all of my
possible needs, and have provided all of the overloading I'll ever
need? (That I'll never need decimal arithmetic, for example.)

> Probably, you want to say operator overloading for complex numbers
> looks just about right. I don't think the very special and rarely used
> case like this would justify inclusion of big complication like
> operator overloading.

Well, a fair percentage of commercial applications need decimal
arithmetic, and there are a lot of physical processes which need
complex. And of course, the need for extended integers or higher
precision floating point (or even rational numbers) isn't exactly rare.

Why should I be required to write:
net = brut.add( brut.multiply( vat ) ) ;
instead of:
net = brut + vat * brut ;
just because the local laws require that the rounding follow the rules
of decimal arithmetic, and not those of IEEE floating point?
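A minimal sketch of what such a decimal type might look like (hypothetical: values stored as hundredths, multiplication rounded half-up to the nearest cent rather than in IEEE binary):

```cpp
// Fixed-point "decimal" storing hundredths (cents). Overloading + and *
// lets the VAT computation read like the formula itself.
struct Decimal {
    long cents;  // value scaled by 100
    explicit Decimal(long c = 0) : cents(c) {}
};

Decimal operator+(Decimal a, Decimal b) { return Decimal(a.cents + b.cents); }

// Round half-up to the nearest cent: decimal rounding of the kind local
// tax rules mandate, not IEEE binary rounding (non-negative values only
// in this sketch).
Decimal operator*(Decimal a, Decimal b) {
    return Decimal((a.cents * b.cents + 50) / 100);
}
```

With this in place, `net = brut + vat * brut;` compiles as written and rounds the way the law requires.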

In practice, of course, smart pointers (overloading -> and *) are
awfully useful. And since the semantics of C style arrays is so broken,
I'm awfully glad that we can implement real arrays, too (with operator
[]).

Finally, given the value semantics of C++, there is just no way we could
do without user overloads of the assignment operator.
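A minimal sketch of the smart-pointer case: overloading `*` and `->` is what lets a resource-managing class be used with raw-pointer syntax (non-copyable in the 2002-era idiom, since there are no move semantics):

```cpp
// Scoped owning pointer: deletes its object on scope exit, yet reads
// exactly like a raw pointer thanks to operator* and operator->.
template <typename T>
class ScopedPtr {
    T* p;
    ScopedPtr(const ScopedPtr&);             // non-copyable
    ScopedPtr& operator=(const ScopedPtr&);
public:
    explicit ScopedPtr(T* q) : p(q) {}
    ~ScopedPtr() { delete p; }
    T& operator*() const  { return *p; }
    T* operator->() const { return p; }
};
```

Code holding a `ScopedPtr<Widget>` dereferences it exactly as it would a `Widget*`, and leak safety comes along for free.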

> > > 3. References.

> > no. when you see a reference, you know exactly what you get.

> I mean use of references as parameters in function call, i.e. 98% of
> their total use. One nice thing in C relatively to Pascal and some
> older languages is that when you see the function call you can
> immediately understand if parameters are passed by value or by
> reference (pointer). In C++, due to references, you have to consult
> function prototype in order to know what's happening. The simplicity
> is gone.

There are two philosophies here:

- You only use const references, as an optimization (or to support
polymorphism or objects which don't copy). In this case, the use of
references is, for all practical purposes, transparent to the user.

- Others accept non-const references as well, with various rules as to
when to use references, and when to use pointers -- a typical rule
would be to use a reference anytime a null pointer would be illegal.

My own tendency is to avoid any case where the two rules would be
different: I don't make much use of out parameters, either as a pointer
or a reference. (On the other hand, when you need them, you need them.
To date, as Murphy would have it, the only times I've needed them have
been in Java:-).)

If you are worried about people calling a function without knowing what
its parameters are, or what it does with them, you can always use the
first rule. But frankly, if you have people calling functions without
knowing what the parameters should be, or what the semantics of the
function are (which will determine what it does with them), I doubt that
any such a simple rule will help. I'd look more in the direction of
adopting a good naming convention. (If there isn't a good, descriptive
name for the function, which eliminates all ambiguity in the mind of the
reader, then the function decomposition is probably wrong anyway. And
getting rid of references isn't going to do anything to help there.)

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung


Thomas Mang

Nov 22, 2002, 3:52:09 PM

Robert Klemme schrieb:

>
> > and redefinitions of non-virtual
> > functions in the derived classes.
>
> agreed. or: virtual should be the default vs. non default needs
> a keyword. or: make compilers warn about overwriting a non
> virtual method.

Puh, I wouldn't like to have virtual functions as the default.
Think how many functions are not virtual, and how many classes don't need a
single virtual function at all (overhead!).
I think there are more non-virtual functions than virtual ones in a usual
component, so there is no need to force emphasis on the special case.

However, a good candidate for changing the default behavior is, IMHO, 'explicit':
I'd like to have 'explicit' removed, and a keyword 'implicit' introduced
instead for when implicit conversion should work.

But it would break quite a lot of existing code...


>
>
> > 2. Operators overloading.
>
> do you mean user defined operator overloading? then i'll agree.
> otherwise not, because at least for numeric values there should
> be some reasonable operators - and they are always overloaded
> since they work with int, float etc.

One reason why operator overloading is confusing is that it's too often
implemented just because it's 'fancy', even when it doesn't make much sense.
Here we are certainly again entering the wonderful field of "personal taste"
(when does it make sense, when not?).

My opinion is that unless a non-programmer would use mathematical
operators for a given expression, programmers shouldn't use them either.

E.g. Does it make sense to write (in real world, not C++ Code) "a + b",
where a and b are both of type, say complex / rational / matrix ?
For me, YES

E.g. Does it make sense to write "Hello" + ", " + "World"?
For me, NO.

I wouldn't have the slightest problem if std::basic_string<> had no operators
overloaded. (I know, some had.)
I don't have much problem with the overloaded operators in std::basic_string<>
either.

But I have a problem when some list component has operator += to add new items,
or car representations have operator < overloaded to compare the respective
vehicle lengths.
This doesn't make code better; it produces confusion and mess (my opinion, at
least).


best regards,

Thomas

Allan W

Nov 22, 2002, 7:55:59 PM
> Michael S <already...@yahoo.com> wrote:
> > The use of "<<" and ">>" in the IO stream library has nothing to do
> > with shifts !

Niklas Matthies wrote


> Of course it does! You shift items onto a stream, or off a stream.

Kind of a shifty answer, isn't it?

Maybe it works better the other way 'round. In English, words are
"overloaded" (e.g., "If you peel the orange, you can keep the peel").
We could say that you "stream" data in or out from a "stream." Then
we could note that you can also "stream" bits in an integral-type
variable...


:^}

Niklas Matthies

Nov 23, 2002, 7:11:45 PM
On 2002-11-23 00:55, Allan W <all...@my-dejanews.com> wrote:
>> Michael S <already...@yahoo.com> wrote:
>> > The use of "<<" and ">>" in the IO stream library has nothing to do
>> > with shifts !
>
> Niklas Matthies wrote
>> Of course it does! You shift items onto a stream, or off a stream.
>
> Kind of a shifty answer, isn't it?
>
> Maybe it works better the other way 'round. In English, words are
> "overloaded" (i.e. "If you peel the orange, you can keep the peel.").
> We could say that you "stream" data in or out from a "stream." Then
> we could note that you can also "stream" bits in an integral-type
> variable...

I'm not sure I follow. I consider 'shift' to have basically the same
meaning whether applied to bits or to elements of a stream. Of course,
the right-hand operand of the integer shift operators specifies the
number of positions by which the sequence of bits is to be shifted, and
not the object whose value is to be placed at the freed position of the
sequence or the object in which the value shifted off the sequence is
to be stored. But what gives the shift operators their name, namely the
notion of a sequence of elements that can be shifted relative to the
beginning or end of the sequence, is quite the same.

-- Niklas Matthies

Francis Glassborow

Nov 23, 2002, 7:11:49 PM
In article <3DDE94C2...@unet.univie.ac.at>, Thomas Mang
<a980...@unet.univie.ac.at> writes

>My opinion is that unless a non-programmer wouldn't use mathematical
>operators for a given expression, programmers shouldn't use them either.

IOWs operator overloading should provide code that does not surprise a
domain expert.


--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation


James Kuyper Jr.

Nov 23, 2002, 7:12:03 PM
Ryjek wrote:
> James Kanze wrote:
>
>> ry...@cox.net (Ryjek) wrote in message
>>
>>> James Kanze wrote:
>>>
>>>> I'm just curious, but can you show me a language which has more than
>>>> one type, and doesn't have these two characteristics for the base
>>>> types? Where floating point and integral addition have different
>>>> operators, for example?
>>>
>>
>>> Ocaml
>>
>>
>> Does it have more than one type? B also had different operators, but
>> that was because it only had one data type, a machine word; it was up to
>> the programmer to remember what type the word contained, and use the
>> right operator.
>
>
> You can definitely have more than one type in Ocaml. It is statically
> typed and is one of the most popular functional languages.

So, what are the two operators used in Ocaml for floating point and
integral addition? How are they used?

James Kuyper Jr.

Nov 23, 2002, 7:12:14 PM

Michael S wrote:
> bob....@gmx.net (Robert Klemme) wrote in message
news:<3DDBC261...@gmx.net>...
>
>> Michael S schrieb:
....

>>>2. Operators overloading.
>>
>>do you mean user defined operator overloading? then i'll agree.
>>otherwise not, because at least for numeric values there should
>>be some reasonable operators - and they are always overloaded
>>since they work with int, float etc.
>>
>
> Only operator overloading I know is user defined operator overloading.

Other people have pointed out that, for instance, the '+' operator is
overloaded with two different meanings in the expressions 1+2 and 3.4+5.6.

> Probably, you want to say operator overloading for complex numbers
> looks just about right. I don't think the very special and rarely used
> case like this would justify inclusion of big complication like
> operator overloading.

Operator overloading is NOT rarely used, nor should it be. It can be
misused, but so can any other language feature. The basic consequence of
operator overloading is to make a user-defined type look like a built-in
type. That's a reasonable way to design a class, when it represents
something conceptually similar to one of the built-in types.

The basic C++ types are all numeric (even char and wchar_t, which really
shouldn't be). That means that any class which extends the concept of
"number" will quite reasonably overload the numeric operators + - * / %
and the corresponding assignment operators. If it represents an
extension of the concept of an integral number, it might also reasonably
overload binary &, plus | ++ -- ~ ^ ! >> << and the corresponding
assignment operators. There's lots of examples: complex, quaternions,
rationals, bignums, decimal types, fixed point types.

One of the C++ compound types is "pointer to T". A class which acts like
a pointer to a single object will, quite reasonably, overload operator*,
operator->, and operator->*. A class which acts like a pointer at an
element of an array will, quite reasonably, overload all of those plus
all of the operators needed to qualify as a random-access iterator. When
you consider all the problems which real C++ pointers have, it's not at
all odd that smart pointers of various types are very popular.

Another compound type is "array of T". A container class that supports
indexed access will, quite reasonably, overload operator[].

Another compound type is "function returning T". A function object will,
quite reasonably, overload operator(). Function objects are one of the
key things that make the STL so powerful.

The '+' operator for strings? The << and >> operators for iostreams? I
have to agree that they don't make as much sense. But they are convenient.

Mirek Fidler

unread,
Nov 25, 2002, 12:28:53 PM11/25/02
to
> My top three:
>
> 1. Undefined behaviour.
> 2. Unspecified behaviour.
> 3. Implementation-defined behaviour.

Exactly. There is way too much undefined behaviour in C++ that is in
fact consistently well defined on most 32/64 bit platforms. Sometimes you
have to use it. Sometimes you can get significant performance gain.
Sometimes you can even get better and safer code.

Recently I was proposing creating some Common (or Flat) C++
'superstandard' that would define most important 'undefined' and/or
'implementation defined' parts of C++. It would be great help for creating
portable software, and it would also not require much work on the compiler
side - in fact, nothing beyond verification (and perhaps providing some
header(s), depending on how far this Common C++ would go...).

Note that this is not meant to replace ISO C++. In fact, a Common
C++ compiler would be required to be ISO compliant, PLUS some of the
undefined/unspecified/implementation-defined aspects would be defined
(sometimes in form of macros and types - e.g.
PLATFORM_ALLOWS_UNALIGNED_COPY, PLATFORM_LITTLE_ENDIAN, integral ptr_t type
and so on).
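A hypothetical fragment of such a "Common C++" header might look like the sketch below. Every macro and type name here is invented for illustration (none is standard), and the detection conditions are deliberately simplified:

```cpp
/* commoncxx.h -- sketch only, not a real specification. */
#if defined(__i386__) || defined(_M_IX86) || defined(__x86_64__)
#  define PLATFORM_LITTLE_ENDIAN 1
#  define PLATFORM_ALLOWS_UNALIGNED_COPY 1
#endif

/* An integral type wide enough to hold a data pointer. The actual type
   would be chosen per platform; 'unsigned long' is only right on some
   (it is too narrow on 64-bit Windows, for example). */
typedef unsigned long ptr_t;
```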

Mirek

Randy Maddox

unread,
Nov 25, 2002, 12:35:03 PM11/25/02
to
francis.g...@ntlworld.com (Francis Glassborow) wrote in message news:<I4zz4aL1...@robinton.demon.co.uk>...

>
> There are many numerical types and semi-numerical types (such as dates
> and time) where operators make sense. If you do not like them, do not
> use them. Anyway this discussion is pretty sterile because none of the
> things you listed are going to go away. The only thing worth considering
> is tackling accidental overriding of non-virtual member functions.
>
> --
> Francis Glassborow ACCU
> 64 Southfield Rd
> Oxford OX4 1PA +44(0)1865 246490
> All opinions are mine and do not represent those of any organisation
>

And most compilers, if you don't turn the warnings off, will generate
a warning if you do hide a base class function. So even this is not a
burning issue.

Randy.

Allan W

unread,
Nov 25, 2002, 1:29:39 PM11/25/02
to
already...@yahoo.com (Michael S) wrote
> bob....@gmx.net (Robert Klemme) wrote
> > Michael S schrieb:

> > > 2. Operators overloading.
> >
> > do you mean user defined operator overloading? then i'll agree.
> > otherwise not, because at least for numeric values there should
> > be some reasonable operators - and they are always overloaded
> > since they work with int, float etc.
> >
> Only operator overloading I know is user defined operator overloading.
> Probably, you want to say operator overloading for complex numbers
> looks just about right. I don't think the very special and rarely used
> case like this would justify inclusion of big complication like
> operator overloading.

Just about every language has operator overloading, including the ones
that don't allow users to do it. Consider FORTRAN, where
I = J + K
and
X = Y + Z
are both legal. The first one (normally) adds two integers and copies
the result to a third integer. The second one (normally) does the same
thing with "real" (floating-point) variables.

If you forgot about language-defined operator overloading, you've
demonstrated one of the key points about how it works when it's
properly designed. What does this do?
Customer c, d, e;
// ...
c = d + e;
If the average programmer doesn't quickly and easily understand what it
means to add two customers, then it shouldn't be done via operator
overloading. Instead, you should be using some function with a more
appropriate name. Contrast that to:
TimeInterval t, u, v;
// ...
t = u + v;
Here it's obvious that we're adding together two time intervals (perhaps
the time elapsed on different "laps" on a track).

> > > 3. References.
> >
> > no. when you see a reference, you know exactly what you get.
> >
> I mean use of references as parameters in function call, i.e. 98% of
> their total use.
> One nice thing in C relatively to Pascal and some older languages is
> that when you see the function call you can immediately understand if
> parameters are passed by value or by reference (pointer). In C++, due
> to references, you have to consult function prototype in order to know
> what's happening. The simplicity is gone.

Usually when one makes this point, they include an example such as:
Foo f;
// ...
Bar(f); // Does this update f? How would we know?

But this isn't fair -- this reasoning has the same problem as the
complaint about operator overloading. Real functions aren't named Bar;
real classes aren't named Foo. (If they are, someone should be sent
back to school.)

When arguments are read-only (99% of the time), the difference between
pass-by-value and pass-by-const-ref should be an implementation detail.
Real functions should not modify their by-ref arguments unless their
name says that they will do so.

Customer c;
// ...
MyDatabase.ReadNewBalance(c); // Probably reads customer's balance from
// a database, then modifies c.balance

MyReport.Add(c); // Probably adds to the report
// without modifying c at all

MyOtherObject.Baz(c); // Probably need to do three things:
// 1. Find out who created a class with a member named Baz,
// and who created a variable named MyOtherObject
// 2. Shoot the guilty parties (or just fire them, or at
// the very least reprimand them)
// 3. Add documentation to this statement, to explain what
// it means to Baz the MyOtherObject. Better yet, fix the
// code so it's more clear.

> > ---
> > [ comp.std.c++ is moderated. To submit articles, try just posting with ]
> > [ your news-reader. If that fails, use mailto:std...@ncar.ucar.edu ]
> > [ --- Please see the FAQ before posting. --- ]
> > [ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]
>
> ---
> [ comp.std.c++ is moderated. To submit articles, try just posting with ]
> [ your news-reader. If that fails, use mailto:std...@ncar.ucar.edu ]
> [ --- Please see the FAQ before posting. --- ]
> [ FAQ: http://www.jamesd.demon.co.uk/csc/faq.html ]

Apparently, removing the newsgroup .sig in a reply is unfashionable?

Allan W

unread,
Nov 25, 2002, 1:30:03 PM11/25/02
to
eldi...@earthlink.net ("Edward Diener") wrote

> I think that a good part of the reason why the original poster brought up
> the 3 items which could lead to programmer abuse is that the idea of
> actually documenting class ( or function ) usage is anathema to so many
> programmers that the thought that such documentation could clarify the
> issues mentioned was not even to be considered.

I think you're right.

> The abuses to which any of
> the 3 items lead are often reduced to insignificance by good documentation
> and by programmers willing to read and understand it.

Like certain types of woodland creatures, such beings are hard to find.
In my experience, they tend to be the people that have the most experience.
Which presents a dilemma: If a company has 8 highly-experienced programmers
and 7 highly-important projects, chances are that 6 of the 7 projects have
only one such programmer. He may be willing to read documentation that
doesn't exist. He may be willing to write documentation that nobody else
will ever read. Think of what would happen if you somehow got hold of a
cellular phone in the 12th century... you can make a call, but nobody will
answer.

On the other hand, if a class or library is constantly sending people
digging through the documentation, it may be a sign that the class or
library is not well designed. Some reusable code is by its very nature
complex, and there's only so much you can do to alleviate that. But I
have seen code where there was no consistent naming strategy, where
argument order seems to have been created by rolling dice, where data
types have to be converted for no good reason. (Microsoft Windows API
might be a good example here -- they did a lot of things right, but
some of the calls to the OS are at best exotic.)

If you can design your class or library so that the arguments come in
an obvious, natural order and type, then it will be easier to use and
people won't be hitting the documentation so often. This combination
often goes together, not by coincidence.

Francis Glassborow

unread,
Nov 25, 2002, 1:36:50 PM11/25/02
to
In article <3DDE94C2...@unet.univie.ac.at>, Thomas Mang
<a980...@unet.univie.ac.at> writes
>However, a good candidate for changing default behavior is, IMHO, 'explicit':
>I'd like to have 'explicit' removed, and introduce a keyword 'implicit'
>instead if implicit conversion should work.
>
>But it would break quite a lot of existing code......

Which is exactly why it is that way round. We knew it was a second-best
choice, but the constraints of existing code (and compilers) made the
first choice a non-starter.


--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

---

David Abrahams

unread,
Nov 26, 2002, 2:31:47 AM11/26/02
to
all...@my-dejanews.com (Allan W) writes:

>> The abuses to which any of
>> the 3 items lead are often reduced to insignificance by good documentation
>> and by programmers willing to read and understand it.
>
> Like certain types of woodland creatures, such beings are hard to find.
> In my experience, they tend to be the people that have the most experience.
> Which presents a dilemma: If a company has 8 highly-experienced programmers
> and 7 highly-important projects, chances are that 6 of the 7 projects have
> only one such programmer. He may be willing to read documentation that
> doesn't exist. He may be willing to write documentation that nobody else
> will ever read.

And worse, while he is busily documenting his work, the rest of the
company is cranking out reams and reams of undocumented junk at four
times the pace.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

---

James Kanze

unread,
Nov 26, 2002, 1:01:48 PM11/26/02
to
c...@volny.cz ("Mirek Fidler") wrote in message
news:<arl8ht$2jh0$1...@news.vol.cz>...
> > My top three:

> > 1. Undefined behaviour.
> > 2. Unspecified behaviour.
> > 3. Implementation-defined behaviour.

> Exactly. There is way too much undefined behaviour in C++ that is
> in fact consistently well defined on most 32/64 bit platforms.
> Sometimes you have to use it. Sometimes you can get significant
> performance gain. Sometimes you can even get better and safer code.

Most of the undefined/unspecified behavior that bothers me isn't that
which is effectively defined on most 32/64 bit platforms. If I am only
porting to those platforms, I know what I can do in addition to what the
standard says, and I do it.

A typical example of where one might get bitten by undefined/unspecified
behavior is the business with creating two auto_ptr in a single
expression that came up lately in clc++m. It results in an easy mistake
to make, and the resulting code will often work, even under extensive
testing. Until some future version of the compiler, in which the
optimizer works slightly differently. More generally, although I find
that code that depends on things like order of evaluation is poorly
written, in reality, we all write poor code from time to time; it is
nice to know that whatever it does in the tests on one machine, it will
do on all of them.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

---

John Nagle

unread,
Nov 26, 2002, 1:37:59 PM11/26/02
to
James Kanze wrote:

> A typical example of where one might get bitten by undefined/unspecified
> behavior is the business with creating two auto_ptr in a single
> expression that came up lately in clc++m. It results in an easy mistake
> to make, and the resulting code will often work, even under extensive
> testing. Until some future version of the compiler, in which the
> optimizer works slightly differently.


The fact that all attempts at auto_ptr to date don't
work right indicates a real problem with C++. Many bright
people have beaten their heads against that wall.
You just can't do "smart pointers" right in C++ without
a little more support in the language.

Doing this in a way that's backwards compatible is
really tough, and may be impossible. I've made two
tries at it so far, and each one had problems.
(Search the archives for the "strict C++" thread.)

It's clear that some pointers have different
attributes than others. Some languages make a "strong"
or "weak" pointer distinction. An "owner" and "user"
distinction is also useful. It's not clear how to
integrate such attributes into C++, but it's worth
thinking about again. What we've got isn't working.

John Nagle
Animats

Edward Diener

unread,
Nov 26, 2002, 2:27:07 PM11/26/02
to
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:uk7j1x...@boost-consulting.com...

> all...@my-dejanews.com (Allan W) writes:
>
> >> The abuses to which any of
> >> the 3 items lead are often reduced to insignificance by good
> >> documentation
> >> and by programmers willing to read and understand it.
> >
> > Like certain types of woodland creatures, such beings are hard to find.
> > In my experience, they tend to be the people that have the most
> > experience.
> > Which presents a dilemma: If a company has 8 highly-experienced
> > programmers
> > and 7 highly-important projects, chances are that 6 of the 7 projects
> > have
> > only one such programmer. He may be willing to read documentation that
> > doesn't exist. He may be willing to write documentation that nobody else
> > will ever read.
>
> And worse, while he is busily documenting his work, the rest of the
> company is cranking out reams and reams of undocumented junk at four
> times the pace.

Which will take forty times the future effort to do anything with since
nobody understands what already has been done and the few programmers who
originally did it have long since left for other projects, departments, or
companies.

That's why documentation is so important. In the short term it takes more
time, but in the long term it saves much more time than it originally took
to create decent documentation. With better and better systems for creating
documentation, there is little reason not to create it when any issues of
future reusability or extensibility are involved.

All three issues originally brought up seem to assume that one can only
determine the usage of implementation details by looking at source code, and
not even implementation source at that, whereas decent documentation will
tell one what the meaning and signature of function calls and/or operator
syntax are in each instance.

David B. Held

unread,
Nov 26, 2002, 2:39:30 PM11/26/02
to
"John Nagle" <na...@animats.com> wrote in message
news:3DE3BEA7...@animats.com...
> [...]

> The fact that all attempts at auto_ptr to date don't
> work right indicates a real problem with C++.

I disagree.

> Many bright people have beaten their heads against that wall.
> You just can't do "smart pointers" right in C++ without a little more
> support in the language.

You just can't have a universal smart pointer. I think that's the lesson
to be learned. And if you look at the evolution of smart pointers, I
think that is what history teaches us. But just because auto_ptr<>
isn't the end-all doesn't mean that it is worthless, or that C++ is
defective. Just because a hammer can't remove torx nuts doesn't
make it ineffective.

> Doing this in a way that's backwards compatible is
> really tough, and may be impossible. I've made two
> tries at it so far, and each one had problems.
> (Search the archives for the "strict C++" thread.)

It's certainly possible that auto_ptr<> will have to make way for
pointers that are invested with more experience. I would say
already that a lot of people use newer smart pointer types, which
happen to work pretty well for the domains in which they were
intended.

> It's clear that some pointers have different attributes than others.
> Some languages make a "strong" or "weak" pointer distinction. An
> "owner" and "user" distinction is also useful. It's not clear how to
> integrate such attributes into C++, but it's worth thinking about again.
> What we've got isn't working.

I think the Boost smart pointers cover a lot of ground. Do you think
they are insufficient?

Dave

Thomas Mang

unread,
Nov 26, 2002, 4:41:17 PM11/26/02
to
>
> >However, a good candidate for changing default behavior is, IMHO, 'explicit':
> >I'd like to have 'explicit' removed, and introduce a keyword 'implicit'
> >instead if implicit conversion should work.
> >
> >But it would break quite a lot of existing code......
>
> Which is exactly why it is that way round. We new it was a second best
> choice, but the constraints of existing code (and compilers) made the
> first choice a non-starter.

I already guessed that that was the reason for having 'explicit' and not 'implicit'.

Interestingly, it would be quite easy to write a program that takes as input some
C++ code with the 'explicit' syntax, and produce as output C++ code that has
'implicit' syntax.

If such a tool were required to be shipped as part of the implementation,
wouldn't it then be worth considering a change from 'explicit' to 'implicit'?
This way, the transition should be straightforward. Much more straightforward
than, for example, introducing the std namespace (okay, assume not everybody just
put 'using namespace std' in his code), or the new header syntax, or....

Such a general strategy would also make it much easier to change other parts
of the language without disturbing current code.


best regards,

Thomas

Ken Hagan

unread,
Nov 27, 2002, 11:48:11 AM11/27/02
to
"Thomas Mang" <a980...@unet.univie.ac.at> wrote...

>
> Interestingly, it would be quite easy to write a program that takes
> as input some C++ code with the 'explicit' syntax, and produce as
> output C++ code that has 'implicit' syntax.

I already have a tool that spells "implicit" as "//lint !e1931". It's
not standard but it works (until I change my LINT vendor).

I'm not sure where the boundary lies between compiler QoI, translation
tools of the sort you describe, and analysis packages like lint.
However, the C++ committee are unlikely to take on the job of designing
either of the latter two, since in the past they have placed even the
first outside their remit.

Randy Maddox

unread,
Nov 27, 2002, 11:49:58 AM11/27/02
to
da...@boost-consulting.com (David Abrahams) wrote in message news:<uk7j1x...@boost-consulting.com>...

> all...@my-dejanews.com (Allan W) writes:
>
> >> The abuses to which any of
> >> the 3 items lead are often reduced to insignificance by good documentation
> >> and by programmers willing to read and understand it.
> >
> > Like certain types of woodland creatures, such beings are hard to find.
> > In my experience, they tend to be the people that have the most experience.
> > Which presents a dilemma: If a company has 8 highly-experienced programmers
> > and 7 highly-important projects, chances are that 6 of the 7 projects have
> > only one such programmer. He may be willing to read documentation that
> > doesn't exist. He may be willing to write documentation that nobody else
> > will ever read.
>
> And worse, while he is busily documenting his work, the rest of the
> company is cranking out reams and reams of undocumented junk at four
> times the pace.

But can they really crank out junk any faster? We always hear about
the cost of taking the time to document code, but how does the time to
document compare to the time to get cranked out junk working?
Documentation written either prior to or concurrently with code often
seems to lead to code that works right out of the gate rather than
junk that compiles but requires massive debugging and testing time to
get to work. Think about XP with writing the tests before the code.
Does that slow things down, or does it lead to better code that works
sooner?

Randy.

Thomas Mang

unread,
Nov 27, 2002, 12:49:43 PM11/27/02
to

Ken Hagan schrieb:

> "Thomas Mang" <a980...@unet.univie.ac.at> wrote...
> >
> > Interestingly, it would be quite easy to write a program that takes
> > as input some C++ code with the 'explicit' syntax, and produce as
> > output C++ code that has 'implicit' syntax.
>
> I already have a tool that spells "implicit" as "//lint !e1931". It's
> not standard but it works (until I change my LINT vendor).
>
> I'm not sure where the boundary lies between compiler QoI, translation
> tools of the sort you describe, and analysis packages like lint.
> However, the C++ committee are unlikely to take on the job of designing
> either of the latter two, since in the past they have placed even the
> first outside their remit.

Well, that doesn't surprise me, but the only thing the standard would have
to define is something like:
"Implementations are required to support tools that can convert C++-code
conforming to standard xx into code conforming to standard yy."


This would certainly not produce top-notch code in the current standard,
because how could such a tool reliably detect where an 'int' was used to
simulate a 'bool' before 'bool' was introduced?

I think things are often not the way even the standardization committee
would like them to be, because changing them would break too much existing
code. But if the existing code can be converted in a mechanical way into
new (better) code, why not take advantage of this?


best regards,

Thomas

David B. Held

unread,
Nov 27, 2002, 2:43:54 PM11/27/02
to
"Randy Maddox" <rma...@isicns.com> wrote in message
news:8c8b368d.02112...@posting.google.com...
> [...]

> Documentation written either prior to or concurrently with code often
> seems to lead to code that works right out of the gate rather than
> junk that compiles but requires massive debugging and testing time to
> get to work. Think about XP with writing the tests before the code.
> Does that slow things down, or does it lead to better code that works
> sooner?

If XP is the benchmark, then I'm gonna stop documenting. ;) Maybe it's
just the Home Edition that's a bit flaky, but I've seen a few oddball
"features" here and there.

Dave

Allan W

unread,
Nov 27, 2002, 10:07:35 PM11/27/02
to
> > >> The abuses to which any of the 3 items lead are often reduced
> > >> to insignificance by good documentation
> > >> and by programmers willing to read and understand it.

> > all...@my-dejanews.com (Allan W) writes:
> > > Like certain types of woodland creatures, such beings are hard
> > > to find. In my experience, they tend to be the people that have
> > > the most experience.
> > > Which presents a dilemma: If a company has 8 highly-experienced
> > > programmers and 7 highly-important projects, chances are that 6
> > > of the 7 projects have only one such programmer. He may be
> > > willing to read documentation that doesn't exist. He may be
> > > willing to write documentation that nobody else will ever read.

> "David Abrahams" <da...@boost-consulting.com> wrote


> > And worse, while he is busily documenting his work, the rest of the
> > company is cranking out reams and reams of undocumented junk at four
> > times the pace.

And getting bonuses for high productivity. And ignoring their bug
counts (or else getting more bonuses for having fixed so many bugs)!

eldi...@earthlink.net ("Edward Diener") wrote

> Which will take forty times the future effort to do anything with since
> nobody understands what already has been done and the few programmers who
> originally did it have long since left for other projects, departments, or
> companies.

If you wrote that much junk, wouldn't you want to move on before anyone
realized it?

> That's why documentation is so important. In the short term it takes more
> time but in the long term it saves much more time than it originally took to
> create the decent documentation. With better and better systems for creating
> documentation, there is little reason for not creating it when any issues of
> future reusability or extensibility are involved.

I've worked in two types of companies:
* Places where management just doesn't understand that. "I'm paying you
to write programs, not to read!"

* Places where management does understand that intellectually, but
fails to put it into practice. "Documentation is good, but we've
got a deadline in 3 months. You can work on the documentation once
it's shipping." Bad enough if it's true, but usually this turns
into a broken promise: "We've got three more projects planned, we
don't have time for that now..."

I understand that there is a third type of company. One that spends a
few days every month investigating problems, not looking for someone to
blame but looking for ways to avoid them in the future. One that spends
even more time searching for new techniques to make future development
bug-free, easy to maintain, and productive. One that consistently
conducts code walkthroughs -- not just two sessions at the beginning
of the project, but also a week before deadline. One that doesn't
consider a program finished just because it passed one test case, but
instead asks the programmer to document it, create more test cases,
and even look for ways to generalize the code so it can move to a
company library. I understand that these DO exist, but I fear that I
can't fly there unless Tinkerbell sprinkles me with Pixie dust and I
think of a happy thought...

> All three issues originally brought up seem to assume that one can only
> determine the usage of implementation details by looking at source code, and
> not even implementation source at that, whereas decent documentation will
> tell one what the meaning and signature of function calls and/or operator
> syntax are in each instance.

Far too often, that assumption is true.

Nobody would ever try to sell a commercial library publicly without
documentation -- but very very often, the documentation that comes with
such libraries is inaccurate and/or indecipherable. Ironically, very
often the most useful part of such documentation is the sample code --
which is usually written for only one language (not always C++). Or if
it does have examples in multiple languages, sometimes the C++ code is
just wrong.

That phenomenon isn't just from tiny no-name companies, either -- I
once had to code to a rather obscure product from IBM. The documentation
had excellent typefaces and nice binding, and that's the highest praise
I could give it. The documentation tended to have explanations like this:

BAR FOOTOBAR(FOO F) ' Basic
BAR FOOTOBAR(FOO F); ' C / C++
Description: Converts a Foo into a Bar
Inputs: F -- the Bar to be converted
Outputs: BAR -- the converted FOO
Details: Converts a Foo into a Bar
Exceptions: If the Frannistan bit is set, the conversion uses
the Heimlich maneuver; otherwise, it uses the Frozz
factor, as detailed in "Inputs" above.
Failures:
Example:
Basic:
FOO F("Yes", 10, "RB\+1")
BAR B
B = FOOTOBAR(F)
C / C++
#include "FOOBAR.H"
struct FOO F;
struct BAR B;
void test() {
char yes[4];
strcpy(yes,"Yes\0");
char rb1[6];
strcpy(rb1,"RB\+1\0");
F = FOO(yes, 10, rb1);
B = FOOTOBAR(F);
}

As you can see, what this REALLY demonstrates is a fundamental lack
of understanding of either C or C++ -- especially when it comes to
C-style strings. Notice in particular how they always explicitly add
the \0 to the end of every string literal, and how they never pass
this string to any function except strcpy. This was consistent
throughout the whole manual. This is IBM we're talking about!

But that's not even the worst thing about this documentation. About
90% of their "example" code gave examples only of function-call
notation. If you don't already understand what a Foo and Bar are,
and why you'd want to convert between them -- then the documentation
above is useless.

I believe this to be typical of a LOT of otherwise-high-quality packages
available today. "State of the art" leaves something to be desired.

Allan W

unread,
Nov 27, 2002, 10:08:30 PM11/27/02
to
> "Randy Maddox" <rma...@isicns.com> wrote

> > Documentation written either prior to or concurrently with code often
> > seems to lead to code that works right out of the gate rather than
> > junk that compiles but requires massive debugging and testing time to
> > get to work. Think about XP with writing the tests before the code.
> > Does that slow things down, or does it lead to better code that works
> > sooner?

Apparently you're talking about a software discipline called "eXtreme
Programming," often called XP.

dh...@codelogicconsulting.com ("David B. Held") wrote


> If XP is the benchmark, then I'm gonna stop documenting. ;) Maybe it's
> just the Home Edition that's a bit flaky, but I've seen a few oddball
> "features" here and there.

Does the smiley indicate you're joking? Obviously you're talking about
Microsoft Windows XP.

David B. Held

unread,
Nov 28, 2002, 3:03:11 PM11/28/02
to
"Allan W" <all...@my-dejanews.com> wrote in message
news:7f2735a5.02112...@posting.google.com...
> [...]

> Apparently you're talking about a software discipline called "eXtreme
> Programming," often called XP.

Ah, never heard of it. That makes more sense.

> dh...@codelogicconsulting.com ("David B. Held") wrote
> > If XP is the benchmark, then I'm gonna stop documenting. ;) Maybe
> > it's just the Home Edition that's a bit flaky, but I've seen a few
> > oddball "features" here and there.
>
> Does the smiley indicate you're joking? Obviously you're talking about
> Microsoft Windows XP.

Yes, and the idea that it would set a standard of coding productivity and
quality (or any M$ OS, for that matter) is not something I could ever say
with a straight face.

Dave

James Kanze

unread,
Nov 28, 2002, 3:04:20 PM11/28/02
to
all...@my-dejanews.com (Allan W) wrote in message
news:<7f2735a5.0211...@posting.google.com>...
[...]

> > That's why documentation is so important. In the short term it takes
> > more time but in the long term it saves much more time than it
> > originally took to create the decent documentation.

I find that it saves time in the short term as well.

> > With better and better systems for creating documentation, there is
> > little reason for not creating it when any issues of future
> > reusability or extensibility are involved.

> I've worked in two types of companies:
> * Places where management just doesn't understand that. "I'm paying you
> to write programs, not to read!"

> * Places where management does understand that intellectually, but
> fails to put it into practice. "Documentation is good, but we've
> got a deadline in 3 months. You can work on the documentation once
> it's shipping." Bad enough if it's true, but usually this turns
> into a broken promise: "We've got three more projects planned, we
> don't have time for that now..."

Most such companies will still do a few tests before delivering, and
expect a minimum amount of functionality. If the goal is just getting
code to compile, documentation takes time. If the goal is working code,
good documentation reduces the time to delivery; having to think about
the problem, the solution, and how it is implemented, results in far
fewer errors going into unit tests and integration. The result is that
correctly documented code is ready for delivery in less time than
undocumented code.

This supposes, of course, that you write the documentation before you
start coding. But that is the usual practice, I believe -- at least,
I've never heard of anyone bothering to write documentation after.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

---

James Kanze

unread,
Nov 28, 2002, 3:04:30 PM11/28/02
to
rma...@isicns.com (Randy Maddox) wrote in message
news:<8c8b368d.02112...@posting.google.com>...

> da...@boost-consulting.com (David Abrahams) wrote in message
> news:<uk7j1x...@boost-consulting.com>...

> > all...@my-dejanews.com (Allan W) writes:

> > >> The abuses to which any of the 3 items lead are often reduced to
> > >> insignificance by good documentation and by programmers willing
> > >> to read and understand it.

> > > Like certain types of woodland creatures, such beings are hard to
> > > find. In my experience, they tend to be the people that have the
> > > most experience. Which presents a dilemma: If a company has 8
> > > highly-experienced programmers and 7 highly-important projects,
> > > chances are that 6 of the 7 projects have only one such
> > > programmer. He may be willing to read documentation that doesn't
> > > exist. He may be willing to write documentation that nobody else
> > > will ever read.

> > And worse, while he is busily documenting his work, the rest of the
> > company is cranking out reams and reams of undocumented junk at four
> > times the pace.

> But can they really crank out junk any faster? We always hear about
> the cost of taking the time to document code, but how does the time to
> document compare to the time to get cranked out junk working?

Who says that the junk has to work?

If it has to work, of course, you're right.

> Documentation written either prior to or concurrently with code often
> seems to lead to code that works right out of the gate rather than
> junk that compiles but requires massive debugging and testing time to
> get to work. Think about XP with writing the tests before the code.
> Does that slow things down, or does it lead to better code that works
> sooner?

I think that the "prior to or concurrently" is the key part.

There are a number of aspects of XP about which I'm sceptical (including
the claim that good tests can replace the requirements specs), but one
thing is certain: anything you do *before* writing the code which helps
to improve your understanding of what you need to write is positive, and
will end up reducing your time to delivery.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

---

James Kuyper Jr.

unread,
Dec 1, 2002, 1:13:40 PM12/1/02
to

===================================== MODERATOR'S COMMENT:

I'm approving this article, since it and the rest of
the thread grew out of an on-topic discussion, but
this whole thread has drifted very far from C++
standardization.


===================================== END OF MODERATOR'S COMMENT
Randy Maddox wrote:
....


> But can they really crank out junk any faster? We always hear about
> the cost of taking the time to document code, but how does the time to
> document compare to the time to get cranked out junk working?
> Documentation written either prior to or concurrently with code often
> seems to lead to code that works right out of the gate rather than
> junk that compiles but requires massive debugging and testing time to
> get to work. Think about XP with writing the tests before the code.
> Does that slow things down, or does it lead to better code that works
> sooner?

That depends upon the size of the project. A small project can often be
completed quicker by just "cranking it out", than by doing prior or
concurrent documentation. The full benefit of documentation will
generally not be realized until the second or third re-write of a program.

Also, care must be taken when choosing documentation requirements. Badly
chosen requirements can make documentation more trouble than it's worth.
I've seen requirements that made design documents practically
equivalent to the final code, except for the use of a different and less
formal syntax. Writing such documentation has all the disadvantages of
prematurely writing code, with none of the advantages (such as the
ability to try compiling it to perform consistency checks).

David Thompson

unread,
Dec 2, 2002, 3:25:50 AM12/2/02
to
Allan W <all...@my-dejanews.com> wrote :
....

> Just about every language has operator overloading, including the ones
> that don't allow users to do it. Consider FORTRAN, where
> I = J + K
> and
> X = Y + Z
> are both legal. The first one (normally) adds two integers and copies
> the result to a third integer. The second one (normally) does the same
> thing with "real" (floating-point) variables.
>
Or integer and real arrays/sections respectively in F90/95.
And for "normally" meaning under the traditional/default implicit
rules, which always could be overridden and now can be replaced;
this is "normal" in the observational but not prescriptive sense.

But Fortran (no longer all caps) 90 or 95 *does* provide
overloading of operators for types not defined intrinsically,
primarily class types (much as in C++, except Fortran
a bit confusingly calls them "derived types") and even
creation of new operator "name"s in the .FOO. syntax.
And according to a recent draft, F2K should add
polymorphism (again much as in C++, except that classes
which will be inherited from must be marked, while
methods that will be polymorphic/virtual need not be.)

--
- David.Thompson 1 now at worldnet.att.net

John Nagle

unread,
Dec 4, 2002, 3:09:01 PM12/4/02
to
Mirek Fidler wrote:

>>My top three:
>>
>>1. Undefined behaviour.
>>2. Unspecified behaviour.
>>3. Implementation-defined behaviour.
>>
>
> Exactly. There is way too much undefined behaviour in C++ that is in
> fact consistently well defined on most 32/64 bit platforms. Sometimes you
> have to use it. Sometimes you can get significant performance gain.
> Sometimes you can even get better and safer code.


Java goes further in that direction, defining the sizes of all
its types, and defining its floating point as IEEE floating point.
This is convenient and consistent, but makes Java support on some
exotic architectures difficult.

C was standardized back when the 36-bit PDP-10 people were
still influential in academia. That's the main reason ANSI
C defines integer arithmetic the way it does. C can be implemented
on ones-complement and signed-magnitude machines of various
unusual word lengths, and it has been.

But today, very few machines are anything other than
twos-complement byte-oriented. There are digital signal
processors with 24, 48, and 56 bit word sizes in volume
production, but I don't know of anything in current manufacture
that isn't byte-oriented, with the possible exception of the
Unisys ClearPath servers.

Yes, Unisys is still selling
machines that will run 36-bit UNIVAC 1100/2200 programs
and 48-bit Burroughs MCP programs. Those are the
world's oldest living computer architectures. The 36-bit
ones-complement data formats are nearly compatible back to
the UNIVAC 1103, from 1952, and strictly compatible with
the UNIVAC 1108, circa 1968.

So that's why C++ still supports 9-bit bytes and odd-sized
words. Whether it should is another question.

John Nagle
Animats

Peter Dimov

unread,
Dec 4, 2002, 3:40:57 PM12/4/02
to
francis.g...@ntlworld.com (Francis Glassborow) wrote in message news:<T7FFYzEv...@robinton.demon.co.uk>...

> In article <3DDE94C2...@unet.univie.ac.at>, Thomas Mang
> <a980...@unet.univie.ac.at> writes
> >However, a good candidate for chaning default behavior is, IMHO, 'explicit':
> >I'd like to have 'explicit' removed, and introduce a keyword 'implicit'
> >instead if implicit conversion should work.
> >
> >But it would break quite a lot of existing code......
>
> Which is exactly why it is that way round. We knew it was a second-best
> choice, but the constraints of existing code (and compilers) made the
> first choice a non-starter.

It is a "third best" choice (from a certain perspective.) The second
best choice is to introduce both 'explicit' and 'implicit', leaving
the door open to compilers to issue a warning when neither is present.

Pete Becker

unread,
Dec 4, 2002, 3:42:43 PM12/4/02
to

===================================== MODERATOR'S COMMENT:
C++-standard content dead, please end this subthread.


===================================== END OF MODERATOR'S COMMENT

John Nagle wrote:
>
> Java goes further in that direction, defining the sizes of all
> its types, and defining its floating point as IEEE floating point.
> This is convenient and consistent, but makes Java support on some
> exotic architectures difficult.
>

And often inefficient on common architectures. The floating point
requirements, in particular, impose a significant slowdown on Intel
processors, which aren't allowed to use their native 80-bit floating
point math internally -- they have to run the math code in a much slower
mode. And then there's the char problem, now that Unicode doesn't fit in
the Java-mandated 16 bit character type. Oh, well, computer cycles are
cheap. May as well waste them.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Francis Glassborow

unread,
Dec 4, 2002, 4:00:00 PM12/4/02
to
In article <3DE3BEA7...@animats.com>, John Nagle
<na...@animats.com> writes

> The fact that all attempts at auto_ptr to date don't
>work right indicates a real problem with C++. Many bright
>people have beaten their heads against that wall.
>You just can't do "smart pointers" right in C++ without
>a little more support in the language.


No, what you cannot do is have something that meets the exact design
specifications that got dumped on auto_ptr. I do not want to spend time
rehashing history, but there are plenty of ways of meeting the needs,
just not via a single smart pointer type (well, I do not count Andrei's
policy-based smart pointer as a single type).


--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

---

Pete Becker

unread,
Dec 4, 2002, 4:04:44 PM12/4/02
to

===================================== MODERATOR'S COMMENT:
You can make that point without turning the thread into Java bashing. :-)


===================================== END OF MODERATOR'S COMMENT

Pete Becker wrote:
>
> John Nagle wrote:
> >
> > Java goes further in that direction, defining the sizes of all
> > its types, and defining its floating point as IEEE floating point.
> > This is convenient and consistent, but makes Java support on some
> > exotic architectures difficult.
> >
>
> And often inefficient on common architectures. The floating point
> requirements, in particular, impose a significant slowdown on Intel
> processors, which aren't allowed to use their native 80-bit floating
> point math internally -- they have to run the math code in a much slower
> mode. And then there's the char problem, now that Unicode doesn't fit in
> the Java-mandated 16 bit character type. Oh, well, computer cycles are
> cheap. May as well waste them.
>
> ===================================== MODERATOR'S COMMENT:
> C++-standard content dead, please end this subthread.
>
> ===================================== END OF MODERATOR'S COMMENT

[top post corrected]

Well, reject this if you must <g>, but the C++ content is that there are
risks in rigid requirements. There are good reasons for the flexibility
that C and C++ offer here.

Alan Griffiths

unread,
Dec 5, 2002, 6:44:28 PM12/5/02
to
dh...@codelogicconsulting.com ("David B. Held") wrote in message news:<uu7j85q...@corp.supernews.com>...

> "John Nagle" <na...@animats.com> wrote in message
> news:3DE3BEA7...@animats.com...
> > [...]
> > The fact that all attempts at auto_ptr to date don't
> > work right indicates a real problem with C++.
>
> I disagree.
>
> > Many bright people have beaten their heads against that wall.
> > You just can't do "smart pointers" right in C++ without a little more
> > support in the language.
>
> You just can't have a universal smart pointer.

You've missed the original point JK made. The problem is that the
following will not work for *any* smart pointer that implements
ownership (not auto_ptr, not boost, not arglib, ...).

typedef any_smart_ptr<int> smart_ptr;

void f(smart_ptr foo, smart_ptr bar);

int main() {
f(smart_ptr(new int()), smart_ptr(new int())); // Possible leak
}

--
Alan Griffiths
http://octopull.demon.co.uk/

David B. Held

unread,
Dec 6, 2002, 11:29:38 AM12/6/02
to
"Alan Griffiths" <al...@octopull.demon.co.uk> wrote in message
news:4c498b63.0212...@posting.google.com...
> > [...]

> > > Many bright people have beaten their heads against that wall.
> > > You just can't do "smart pointers" right in C++ without a little
> > > more support in the language.
> >
> > You just can't have a universal smart pointer.
>
> You've missed the original point JK made. The problem is that the
> following will not work for *any* smart pointer that implements
> ownership (not auto_ptr, not boost, not arglib, ...).
>
> typedef any_smart_ptr<int> smart_ptr;
>
> void f(smart_ptr foo, smart_ptr bar);
>
> int main() {
> f(smart_ptr(new int()), smart_ptr(new int())); // Possible leak
> }

The question is whether it *ought* to work. Granted, it makes smart
pointers fundamentally different from raw pointers, but isn't the issue
really sequence points? And isn't it true that sequence points (or the
lack thereof) can cause problems in all kinds of situations? So we
just have to be careful about relying on the order of execution. I
don't see that as a failure to properly handle smart pointers. It just
means that C++ does things in a way where we have to be careful.
I guess I don't see it as a problem any more than not being able
to write:

foo(i++, ++i);

C++ is not a functional language, so we can't always compose
functions in what may seem like obvious ways.

Dave

James Kanze

unread,
Dec 6, 2002, 11:30:06 AM12/6/02
to
al...@octopull.demon.co.uk (Alan Griffiths) wrote in message
news:<4c498b63.0212...@posting.google.com>...

> dh...@codelogicconsulting.com ("David B. Held") wrote in message
> news:<uu7j85q...@corp.supernews.com>...
> > "John Nagle" <na...@animats.com> wrote in message
> > news:3DE3BEA7...@animats.com...
> > > [...]
> > > The fact that all attempts at auto_ptr to date don't
> > > work right indicates a real problem with C++.

> > I disagree.

> > > Many bright people have beaten their heads against that wall. You
> > > just can't do "smart pointers" right in C++ without a little more
> > > support in the language.

> > You just can't have a universal smart pointer.

> You've missed the original point JK made. The problem is that the
> following will not work for *any* smart pointer that implements
> ownership (not auto_ptr, not boost, not arglib, ...).

> typedef any_smart_ptr<int> smart_ptr;

> void f(smart_ptr foo, smart_ptr bar);

> int main() {
> f(smart_ptr(new int()), smart_ptr(new int())); // Possible leak
> }

Right.

There are two possible solutions: garbage collection, and introducing a
sequence point between the evaluation of parameters (without imposing an
order on that evalutation). Personally, I think both would be a good
idea. Garbage collection simplifies the programmers work, and in many
cases, would make the additional smart pointers superfluous. Many cases
isn't all cases, however; even with garbage collection, there will be
cases where the smart pointer solution is preferable, and I would like
to see it made to work as well.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

---

Peter Dimov

unread,
Dec 6, 2002, 11:32:00 AM12/6/02
to
al...@octopull.demon.co.uk (Alan Griffiths) wrote in message news:<4c498b63.0212...@posting.google.com>...
> dh...@codelogicconsulting.com ("David B. Held") wrote in message news:<uu7j85q...@corp.supernews.com>...
> > You just can't have a universal smart pointer.
>
> You've missed the original point JK made. The problem is that the
> following will not work for *any* smart pointer that implements
> ownership (not auto_ptr, not boost, not arglib, ...).
>
> typedef any_smart_ptr<int> smart_ptr;
>
> void f(smart_ptr foo, smart_ptr bar);
>
> int main() {
> f(smart_ptr(new int()), smart_ptr(new int())); // Possible leak
> }

This is a specific instance of a general problem. Unmanaged resources
of any kind - not just memory - and exceptions (or any nontrivial
control flow, for that matter) do not mix well. Smart pointers are not
to blame.

Tom Hyer

unread,
Dec 9, 2002, 4:39:11 AM12/9/02
to
kuy...@wizard.net ("James Kuyper Jr.") wrote in message news:<3DDF75A4...@wizard.net>...
> Ryjek wrote:
> > James Kanze wrote:
> >
> >> ry...@cox.net (Ryjek) wrote in message
> >>
> >>> James Kanze wrote:
> >>>
> >>>> I'm just curious, but can you show me a language which has more than
> >>>> one type, and doesn't have these two characteristics for the base
> >>>> types? Where floating point and integral addition have different
> >>>> operators, for example?
> >>>
>
> >>> Ocaml
> >>
>
> So, what are the two operators used in Ocaml for floating point and
> integral addition? How are they used?

In OCaml, the operators + and * are not overloaded -- to do so would
wreak havoc in its type recognition system. If I type

let double x = x + x;;

the OCaml compiler recognizes that double is a function which takes an
int and returns an int. If + could also add floats, then the type of
'double' could not be deduced. Thus I need to write another function

let double_float x = x +. x;; [sic]

So +. is the floating-point addition operator, and the compiler can
tell that double_float expects a float argument. Similarly, '*' has
type int->int and '*.' has type float->float.


Now, to C/C++ programmers, this looks awful. Even when we recall the
hours we've lost typing "n / 365" instead of "n / 365.0", it's just
not worth it. Even the thrill of having the environment respond "fun
double: int -> int" instead of having to be told doesn't make it
worthwhile. Also, it binds you tightly to having only one int type
(OCaml uses 32 bits) and one float type (64 bits), and you can just
forget about complex.

Despite this amazing wart, OCaml is fairly popular (as functional
languages go) and can apparently be used to write real software. For
a (C-hostile) discussion of ML typing, see
http://perl.plover.com/yak/typing/.

-- Tom Hyer

Tom Hyer

unread,
Dec 10, 2002, 11:11:38 PM12/10/02
to
thoma...@ubsw.com (Tom Hyer) wrote in message news:<26fabedc.02120...@posting.google.com>...
> ... Similarly, '*' has

> type int->int and '*.' has type float->float.

Of course this is wrong. '*' and '+' have type int->int->int, and
'*.' and '+.' have type float->float->float. Apologies to anyone who
may have been confused.
