
A safer/better C++?


Peter Most

Dec 6, 2005, 1:17:34 PM
Hello everybody,

I would like to get some opinions about whether it would make sense to
change the C++ design principles so that it would become safer for beginners.

I'm assuming there are two kinds of beginner:
1) A beginner who, through reading and education, will eventually become an
advanced developer and maybe even an expert developer. For this discussion
I'll simply call this kind of developer the "novice" developer.
2) A beginner who doesn't read or educate himself and simply muddles
through. I would like to call this kind of developer the "ignorant" developer.

Now both kinds of developer start to program, and sooner or later they start
to use arrays. Both will probably at some point start to use a vector. I'm
also assuming that both developers have difficulties with the index, and the
designer of vector obviously also assumed this; why else would there be an
at()?

The novice developer reads more about vector and learns that there are two
ways to access the elements: a fast but unsafe way via operator[], and a
slightly slower but safer way via at(). He starts to use at() because it is
a lot safer and he doesn't need that last bit of speed.
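To illustrate the difference (a minimal sketch; the vector here has three
elements, so index 10 is out of bounds):

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main()
    {
        std::vector<int> v( 3, 0 );  // valid indexes are 0, 1 and 2

        v[ 1 ] = 42;                 // operator[]: unchecked; v[ 10 ] here
                                     // would be undefined behavior

        try {
            v.at( 10 ) = 42;         // at(): checked access
        } catch ( std::out_of_range const& e ) {
            std::cerr << "bad index: " << e.what() << '\n';
        }
    }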

The ignorant developer also starts to use vector but doesn't learn that
there are two interfaces, so he still uses operator[] and still has problems
and bugs with the index access.

So my question is now: would it not have been better to reverse the
semantics of the two versions, meaning operator[] checks the index and
throws an exception, and at() allows unchecked access? I'm assuming of
course that either beginner starts to use the vector via operator[]. If
this assumption is correct, then a beginner would start with the safe
version and maybe, if he needs that additional speed, later use the at()
version.

This is of course easiest done in libraries, because then the language
doesn't have to be changed. But I wonder whether it would also make sense
in a new language design. It wouldn't be C++ anymore, because it would break
backwards compatibility, but maybe a worthy C++ successor?

To give some background on these thoughts:

I've been developing in C++ for about 15 years, and in recent years I got
the feeling that C++ is losing ground to other languages like Java and C#.
I simply love the sheer power I have with C++, and if I shoot myself in the
foot, so to speak, I simply learn to aim better; but I have also learned
that for beginners it is often too difficult to learn. There are simply too
many mistakes a beginner can make, and only through education does he learn
to avoid those beginner mistakes. Mind you, I'm only referring to unsafe
constructs which lead to crashes. I'm not talking about supposedly complex
features like operator overloading or multiple inheritance. If a beginner
doesn't understand these features he probably won't use them. But because
of the aforementioned ignorant developers, the language is getting a bad
reputation for its "unsafety", which also influences decision makers to
abandon C++ projects. Of course I read Bjarne Stroustrup's excellent
posting (http://www.research.att.com/~bs/blast.html) where he addresses
some of the issues. But the question remains:

Should C++ be changed into a more beginner-friendly language?

I'm very interested in hearing your opinions.

Kind regards

Peter


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Alf P. Steinbach

Dec 7, 2005, 9:07:48 AM
* Peter Most:

> Of course I read Bjarne Stroustrup's excellent
> posting (http://www.research.att.com/~bs/blast.html) where he addresses
> some of the issues.

The Internet Gods refuse to honor my request to see that page,
teleporting me instead to ...

Argh -- on the third try to copy the URL I get redirected to, they
finally let me through to that page; but there is evidently something
that doesn't quite work as it should.

I meant to write,

"did you mean <url:
http://www.research.att.com/~bs/new_learning.pdf>?",

but now, well, I still think that's relevant! ;-)


PS: Regarding Bjarne's comments on the statement "C++ sucks", I think he
may have misunderstood. All those who state that "C++ sucks" simply
mean that they're drawn inexorably towards C++. That's how good it is!

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

David Abrahams

Dec 7, 2005, 9:05:56 AM
Peter Most <Peter...@gmx.de> writes:

> [snip]
>
> The novice developer reads more about vector and learns that there are two
> ways to access the elements: a fast but unsafe way via operator[], and a
> slightly slower but safer way via at(). He starts to use at() because it is
> a lot safer and he doesn't need that last bit of speed.

Safer how? It won't make an unintentional out-of-bounds access any
more correct. Incorrect code can always have unintended (i.e. unsafe)
behaviors.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Bob Hairgrove

Dec 7, 2005, 9:02:43 AM

It's an interesting idea, but I think the way it is now is more
intuitive. Arrays do no checking when using [], so why should
vector::operator[]? The fact that at() doesn't exist for arrays
implies (to me) that it is more special and has enhanced functionality
(i.e., it checks the index).

>[snip]
>
>Should C++ be changed into a more beginner-friendly language?

No ... isn't that why we have Java, C# and Visual Basic? ;)

One might argue that knives should be outlawed because you can cut
yourself with them.

--
Bob Hairgrove
NoSpam...@Home.com

Sektor van Skijlen

Dec 7, 2005, 9:10:56 AM
On 6 Dec 2005 13:17:34 -0500, Peter Most wrote:
> Hello everybody,

> I would like to get some opinions about whether it would make sense to
> change the C++ design principles so that it would become safer for beginners.

Given what you mean by the term "beginner", I think it would make sense,
but your proposal wouldn't accomplish it either.

The safety problem in C++ is not only a matter of indexing a vector out of
bounds; there are also other things, like accessing objects in freed memory.
I'd rather see the solution in an appropriate high-level environment in which
the C++ program runs, so that every invalid access ends up with an exception.
Checking bounds in [] can be added by an implementation as well, on direct
request.

Making this change in the Standard would also change existing programs,
which is not what "non-beginners" want.

> [snip]
>
> Should C++ be changed into a more beginner-friendly language?

This thought is still worth considering. But the only way that leads to
safe usage of C++ is learning. So if a developer is assigned to work in a
production environment, there must be someone to look at the code and catch
such problematic usages. I have heard, for example, that the inability to
overload operators in Java is called an advantage. Why? Because it prevents
immature developers from doing stupid things. However, a language which is
suitable for beginners is usually not suitable for professionals. C++ can be
made safer, but only at the cost of removing features which professionals
require in this language. So it depends on which developers the project
needs, and which of them can be allocated to it.

When I started learning C++ a couple of years ago, I just accepted it as it
is, and I never had problems with its safety. Of course, I made mistakes many
times, and there were also a lot of problems that were very hard to detect,
but I never thought that hardening the language is the way C++ should go.
For "ignorant" developers, maybe Java would be more suitable.

For a complicated language (that is, a language that provides a lot of
capabilities and programming paradigms) there is only one way to use it
correctly: learning. And for "ignorant" developers there's no hope. Hardening
C++ does not help with that.


--
// _ ___ Michal "Sektor" Malecki <sektor(whirl)kis.p.lodz.pl>
\\ L_ |/ `| /^\ ,() <ethourhs(O)wp.pl>
// \_ |\ \/ \_/ /\ C++ bez cholesterolu: http://www.intercon.pl/~sektor/cbx
"I am allergic to Java because programming in Java reminds me casting spells"

Walter Bright

Dec 7, 2005, 9:24:40 AM

"Peter Most" <Peter...@gmx.de> wrote in message
news:4395bb80$0$27884$9b4e...@newsread4.arcor-online.net...

> I would like to get some opinions about whether it would make sense to
> change the C++ design principles so that it would become safer for beginners.

C++ has been around for over 20 years now, an eternity in the software
business. I've been working on C++ compilers since 1986 or so. It certainly
makes sense every once in a while to take a step back, and see if the
feature set is the best fit for current thinking and coding practice. For
example, C++'s notion of strings is based on past ideas about character sets
and is out of step with current UTF technology.

I, however, am less concerned with making C++ a safe language for beginners
than with looking at it, for professional programmers, from the standpoint of:

1) improving programmer productivity
2) improving program reliability
3) improving program portability
4) improving program performance
5) improving documentation
6) improving manageability
7) reducing the effort necessary to master it
8) making internationalization of apps much easier

So what might such a reengineering look like? One example is the D
programming language, from www.digitalmars.com/d/

-Walter Bright, www.digitalmars.com
Digital Mars - C, C++, D programming language compilers

Rob

Dec 7, 2005, 9:23:13 AM
Peter Most wrote:

> [snip]
>
> Now both kinds of developer start to program, and sooner or later they
> start to use arrays. Both will probably at some point start to use a
> vector. I'm also assuming that both developers have difficulties with
> the index, and the designer of vector obviously also assumed this; why
> else would there be an at()?

I'll leave that one for someone who knows something about the design
rationale of the STL. :-)

>
> The novice developer reads more about vector and learns that there
> are two ways to access the elements: a fast but unsafe way via
> operator[], and a slightly slower but safer way via at(). He starts
> to use at() because it is a lot safer and he doesn't need that last
> bit of speed.

No. The novice developer is unlikely to pick up on the existence of at(),
but will be more careful in the use of operator[]. This isn't a fault of
the novice: it's because most examples (at least those that don't
involve using iterators) in introductory texts use operator[].
>
> The ignorant developer also starts to use vector but doesn't learn
> that there are two interfaces, so he still uses operator[] and still
> has problems and bugs with the index access.

In my experience, vector is a step up for the ignorant developer. They
are more likely to do this (for an array of ints):

int *array;              // uninitialized pointer -- points nowhere
array[some_index] = 42;  // undefined behavior: writes through a wild pointer

The only way you could prevent this would be to remove pointers from
the language.

>
> [snip]
>
> This is of course easiest done in libraries, because then the language
> doesn't have to be changed. But I wonder whether it would also make
> sense in a new language design. It wouldn't be C++ anymore, because it
> would break backwards compatibility, but maybe a worthy C++ successor?

What you are describing is the same sort of thinking that resulted in
Java: they removed features from the language they didn't like, and
added others that they claimed were safer.

The problem with that is that such a language is off-putting to someone
with any experience. With a disciplined approach, the benefits of a
"safe" language or library are relatively limited -- and can actually be
limiting, by eliminating some powerful techniques.

>
> [snip]
>
> I've been developing in C++ for about 15 years, and in recent years I
> got the feeling that C++ is losing ground to other languages like Java
> and C#.

C++ is losing ground to languages like Java and C# in problem areas
that are better suited to using Java or C#. That's the way it should
be. If you are doing a job that Java is designed to do well, you are
better off using Java than C++. But if you are doing a job that is
outside the realm of what Java does well, you will be better off
choosing another language (and C++ may well be the better choice).


> [snip]
>
> But because of the aforementioned ignorant developers, the language is
> getting a bad reputation for its "unsafety", which also influences
> decision makers to abandon C++ projects.
>
> Should C++ be changed into a more beginner-friendly language?

My experience with decision makers (at least the rational ones) who
abandon C++ in favour of a language like Java is that they don't do it
because of safety. They do it because they find they are doing things
that are better suited to Java. One company I know of uses Java for
initial rapid prototyping (e.g. to convince potential users that
particular features are worthwhile) but then hands the specification
over to another team that uses C++ to implement the "production
version", in which things like runtime performance are more critical.

There is some element of Java advocacy in a lot of new recruits (in
part because universities have gotten onto the bandwagon of teaching
Java). That is actually a self-licking ice-cream cone: it is easier to
find developers who know Java, but it is just as difficult to find a
*good* developer who knows Java as it is to find a good developer in
C++. So going to Java simply because it is "safer" has the side effect
of encouraging mediocre development standards, because it is easier to
find programmers who can program with that safety net.

That's not a fault of Java (or any other language) per se. It is a
problem with the expectations that some managers have because they are
using a "safe" language --- they lower the standards when selecting
developers. The reality is that a good developer will be a good
developer regardless of programming language (as long as they are given
time and mentoring to learn a new language effectively).

kanze

Dec 7, 2005, 9:29:22 AM
Peter Most wrote:

> I would like to get some opinions about whether it would make
> sense to change the C++ design principles so that it would become
> safer for beginners.

Making the language safer is in general a good idea, even for
non-beginners. The question is rather how, and at what price.
(Note that some languages which claim "safety" are actually less
safe than C++, at least in the hands of someone who knows what
he is doing.)

> I'm assuming there are two kinds of beginner:
> 1) A beginner who, through reading and education, will
> eventually become an advanced developer and maybe even an
> expert developer. For this discussion I'll simply call this
> kind of developer the "novice" developer.

> 2) A beginner who doesn't read or educate himself and simply
> muddles through. I would like to call this kind of developer
> the "ignorant" developer.

I'd hesitate to call that one a developer.

> Now both kinds of developer start to program, and sooner or
> later they start to use arrays. Both will probably at some
> point start to use a vector. I'm also assuming that both
> developers have difficulties with the index, and the designer
> of vector obviously also assumed this; why else would there be
> an at()?

Why there is a function at() is a good question. There are
probably specific cases where it is reasonable to wait for the
vector itself to verify the validity of an index, and to report
the error with an exception, but they are certainly rare.

> The novice developer reads more about vector and learns that
> there are two ways to access the elements: a fast but unsafe
> way via operator[], and a slightly slower but safer way via
> at(). He starts to use at() because it is a lot safer and he
> doesn't need that last bit of speed.

What makes you say that there is any difference in speed? I
would expect that in the default mode, operator[] and at() have
about the same performance. The only difference is that at()
raises an exception, so it represents an error which is, in some
way, expected (or expectable), albeit exceptional, whereas
operator[] will provoke an immediate core dump -- an assertion
failure.

Like other assertions, of course, this can be turned off in
production code, if the profiler shows it necessary.
Regretfully, I fear that in most library implementations,
turning off checking is an all-or-nothing proposition, and some
of the other checks are expensive enough that one will have to
turn them off in production code. This is a weakness of the
current implementations, however, and not of the standard.

> The ignorant developer also starts to use vector but doesn't
> learn that there are two interfaces, so he still uses operator[]
> and still has problems and bugs with the index access.

A developer who has problems with indexes has problems,
regardless of what the library does. Using operator[], and
crashing immediately, is probably the best solution. But note
that if he has problems with indexes, it won't always be
apparent -- accessing the index 2 when the data needed is at
index 1 will not cause an immediate error if the vector has a
size of 10, but will result in something worse than a program
crash -- a silently incorrect result.

> So my question is now: would it not have been better to
> reverse the semantics of the two versions, meaning operator[]
> checks the index and throws an exception, and at() allows
> unchecked access?

That sounds like a good way of increasing the danger.

The C++ standard has no concept of "must crash" (except in the
case of the library functions assert() and abort()). Maybe this
is what is needed: a requirement that operator[] act as if
an assertion failure had occurred in the case of an illegal
index. (If this is done, of course, one should add an unsafe_at
function which could be used in cases where the profiler
indicates that bounds checking is a bottleneck.)

[...]

> To give some background on these thoughts:

> I've been developing in C++ for about 15 years, and in recent
> years I got the feeling that C++ is losing ground to other
> languages like Java and C#.

For some things. The reasons have nothing to do with the
relative safety of the languages, however, but with the
available infrastructure -- it's possible to write server side
dynamic web pages in C++, but the available infrastructures make
it far easier in Java.

It's interesting to note that Java hasn't replaced C++ in
critical applications, because it isn't safe enough. In the
hands of an organization which knows what it is doing, C++ is in
fact far safer than Java.

> [snip]

> Should C++ be changed into a more beginner-friendly language?

I think that there are some things that do need improvement,
not just for beginners -- I have over fifteen years' experience
with the language, and I still occasionally end up declaring a
function when I mean to define a variable. I'm not convinced
that safety, per se, is the problem, however.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Branimir Maksimovic

Dec 7, 2005, 9:33:23 AM

Peter Most wrote:
> Hello everybody,
>
> I would like to get some opinions about whether it would make sense to
> change the C++ design principles so that it would become safer for beginners.
>
No. C++ is good in its area. You can easily write real-world
applications that will perform as expected. No run-time surprises
there. You get what you write.

>
> Should C++ be changed into a more beginner-friendly language?

If that means complex compilers, then the answer is no, for sure.
Those friendly languages get ugly when you try to do some
heavy-duty stuff with reasonable performance and resource usage.
In that case they are not beginner friendly, as all the unsafe features
are still there, multiplied by the lack of support for them in the "safe"
environment. Also, beginner-friendly languages usually limit the number
of ways in which a program architecture can be implemented.
So C++ will never be an easy language, but many things can
be done more quickly and more easily than in many other languages
once you grasp it.
It's just a pragmatic thing. Whenever it is possible to use some
other language I always prefer the other one, but in many cases
it's just not applicable, or I'm not satisfied with the performance
or behavior of the other language, and that I usually can't change
without waiting for a patch or the next version of the language...
Greetings, Bane.

benben

Dec 7, 2005, 9:30:06 AM
Peter Most wrote:
> Hello everybody,

Hello Peter!

>
> I would like to get some opinions about whether it would make sense to
> change the C++ design principles so that it would become safer for beginners.
>

> [snipped]


>
> The ignorant developer also starts to use vector but doesn't learn that
> there are two interfaces, so he still uses operator[] and still has problems
> and bugs with the index access.

Actually, I don't see how this happens. If you are accessing an element
out of range, both operator[] and at() will fail. The only difference is
that operator[] almost always crashes the program immediately, whereas
at() will throw an exception. Now, handling the exception gracefully is
a much more challenging task than knowing at() and operator[] in itself.
It is very unlikely that the Ignorant will care to handle the exception
properly. Therefore the program crashes all the same.

>
> So my question is now: would it not have been better to reverse the
> semantics of the two versions, meaning operator[] checks the index and
> throws an exception, and at() allows unchecked access? I'm assuming of
> course that either beginner starts to use the vector via operator[]. If
> this assumption is correct, then a beginner would start with the safe
> version and maybe, if he needs that additional speed, later use the at()
> version.

I am not too sure about this. Perhaps the designer decided to make the
overhead more "visible" to the user; that is, most people can tell just
from the accessing code that at() is potentially a more costly operation,
because it looks like one (compare ++a and increment(a)).

Furthermore, it is entirely possible for a C++ implementation to offer a
checked operator[] optionally for debug builds.

>
> This is of course easiest done in libraries, because then the language
> doesn't have to be changed. But I wonder whether it would also make sense
> in a new language design. It wouldn't be C++ anymore, because it would break
> backwards compatibility, but maybe a worthy C++ successor?

In fact, if you want to make this little change to the core language,
then possibly many people will want their changes, too, in the core
language. You will have an inexhaustible list of demands, and that alone
will make the language several times bigger and more complex than what
C++ already is.

The reason why C++ is so complex (as most people think) is, in my
opinion, that C++ tries to address a wide range of audiences. So,
within the core language, which everyone needs to agree upon, the
only viable version of accessing an array is, well, the unsafe
built-in array.

Others who think the built-in array facility is a bad idea can
implement their own version to suit their needs. This is a beauty of
C++ you rarely see in other languages. After all, these better versions
will mostly make use of the "unwanted" built-in array facility, or of
other implementations which use it. This justifies why the built-in
array is there.

>
> [snipped: background about C++ losing ground to Java and C#, and its
> reputation for "unsafety"]

Here are some points to make:

1. You simply can't expect the Ignorants to become good programmers. Not
in C++, nor in any other language (such as Java).

2. Every language has its own set of unsafe constructs. C++ does; so
do Java, C#, Ada, etc. For example, the array in Java is not 100% type
safe. These unsafe constructs are inevitable because they are essential,
or are compromises to balance other potential unsafeties. The bottom line
is, unsafe constructs must look obvious. Honestly, there are many
not-so-obvious dangers in C++ which you and I and many other users will
have to face, but compared to real-world programming, the language
itself is usually a small chapter of a long story.

3. C++ was never an educational language, IMO, and it is unlikely
to become one in the near future. I am under the impression that an
educational language will stay an educational language (Pascal); a RAD
language will stay a RAD language (Visual Basic).

4. The ugliest part of C++, IMO, is things like declaring a const pointer,
a pointer to a const object, or a const pointer to a function taking a const
reference to a const object and returning a reference to a pointer to a
const object, etc. Today, I am ashamed to say, I still haven't mastered this
area of the language. Fortunately, I can typedef all of these in templates,
which is very useful. (A small cheat sheet for the simplest cases follows
this list.)

5. Sometimes an immediate crash is not a bad thing. It means the
problem is deterministic and can mostly be caught by a debugger.
Exceptions, too, can crash the program if not handled properly. The
point is, never let the program run with a problem unnoticed.
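To illustrate the declarations point 4 complains about, a small cheat
sheet (my own sketch; read each declaration from right to left):

    int x = 0;
    int y = 0;

    const int *p1 = &x;        // pointer to const int: *p1 = 1 is an error
    int *const p2 = &x;        // const pointer to int: p2 = &y is an error
    const int *const p3 = &x;  // const pointer to const int: both are errors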

>
> Should C++ be changed into a more beginner-friendly language?

Yes. But probably not in the way you have described.

>
> I'm very interested in hearing your opinions.
>
> Kind regards
>
> Peter
>

Regards,
Ben

John Christopher

Dec 7, 2005, 9:59:11 AM
It is more a question of writing a tutorial that introduces as few features
as possible and still allows the novice to do things. I would not advise
changing the language; it is pretty good as it is, and stability is such a
virtue.

Kai-Uwe Bux

Dec 7, 2005, 10:59:43 AM
David Abrahams wrote:

> Peter Most <Peter...@gmx.de> writes:
>
>> Hello everybody,
>>
>> I would like to get some opinions about whether it would make sense to
>> change the C++ design principles so that it would become safer for beginners.
>>

[claim that at() is safer than operator[] in std::vector]


>>
> Safer how? It won't make an unintentional out-of-bounds access any
> more correct.

Safer in that it avoids undefined behavior, which can (and for some classes
of mistakes -- like out of bounds access or dereferencing a deleted object
-- often does) result in behavior that covers up the bug.

> Incorrect code can always have unintended (i.e. unsafe) behaviors.

Still, there is nothing wrong with thinking about how to reduce the
likelihood.


Best

Kai-Uwe Bux

Kai-Uwe Bux

Dec 7, 2005, 11:00:04 AM
Peter Most wrote:

> Hello everybody,
>
> I would like to get some opinions about whether it would make sense to
> change the C++ design principles so that it would become safer for beginners.
>

[snipped: suggestion to swap at() and operator[] in std::vector]
>

I do not think this is a good idea. I would rather have operator[] do
something like:

reference operator[]( size_type pos ) {
    assert( pos < this->size() );   // checked unless NDEBUG is defined
    return *( begin() + pos );      // for example -- the element access
}

so that in a debug build (where NDEBUG is not defined) I will have bug
detection, and I do not have a performance penalty in production code
(compiled with NDEBUG).

It might be even better to have a second incarnation of assert, say
std_assert, that is used by the standard library to enforce contracts and
that kicks in whenever _STD_ENFORCE_CONTRACTS is defined. The idea would be
to turn as much undefined behavior into defined runtime errors as is
feasible.
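A minimal sketch of such a std_assert (this implementation is of course
just one possibility):

    #include <cstdio>
    #include <cstdlib>

    // Active only when the user asks the library to enforce its contracts;
    // deliberately independent of NDEBUG, so ordinary asserts can stay off.
    #ifdef _STD_ENFORCE_CONTRACTS
      #define std_assert( cond ) \
          ( (cond) ? (void)0 \
                   : ( std::fprintf( stderr, "contract violated: %s\n", #cond ), \
                       std::abort() ) )
    #else
      #define std_assert( cond ) ( (void)0 )
    #endif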


Best

Kai-Uwe Bux

Peter Most

Dec 7, 2005, 11:03:48 AM
David Abrahams wrote:

Safer in the sense that it does not simply crash, but throws an exception
which hopefully will be caught somewhere. At least that way you can add a
top-level exception handler and know that this error won't crash your
program.
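For example, a minimal sketch of such a top-level handler (run_application()
is just a stand-in for the real program):

    #include <exception>
    #include <iostream>
    #include <vector>

    void run_application()          // stand-in for the real program
    {
        std::vector<int> v( 3, 0 );
        v.at( 10 ) = 42;            // throws std::out_of_range
    }

    int main()
    {
        try {
            run_application();
        } catch ( std::exception const& e ) {
            // every uncaught at() failure lands here instead of aborting
            std::cerr << "fatal error: " << e.what() << '\n';
            return 1;
        }
    }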

Peter Most

Dec 7, 2005, 1:53:36 PM
Sektor van Skijlen wrote:

> On 6 Dec 2005 13:17:34 -0500, Peter Most wrote:

[snip]

> Given what you mean by the term "beginner", I think it would make sense,
> but your proposal wouldn't accomplish it either.
>
> The safety problem in C++ is not only a matter of indexing a vector out of
> bounds; there are also other things, like accessing objects in freed memory. I'd

Well I have to start somewhere ;-)

> rather see the solution in an appropriate high-level environment in
> which the C++ program runs, so that every invalid access ends up
> with an exception. Checking bounds in [] can be added by an
> implementation as well, on direct request.
>
> Making this change in the Standard would also change existing programs,
> which is not what "non-beginners" want.
>

But would it change existing programs for the worse?

[snip]

> This thought is still worth considering. But the only way that leads to
> safe usage of C++ is learning. So if a developer is assigned to work in a
> production environment, there must be someone to look at the code and
> catch such problematic usages. I have heard, for example, that the
> inability to overload operators in Java is called an advantage. Why?
> Because it prevents immature developers from doing stupid things.

I tried to find examples of such "stupid things" and couldn't find a good
one. The best I could find was some example where a developer overloads
'+=' for a container but actually removes an element (or something like
that). The argument goes on to explain that with methods this doesn't
happen, completely ignoring that a developer could write an 'add' method
which also removes an element, or formats your hard drive.
So if an immature developer overloaded the '+=' operator in such a way,
who can say he wouldn't write 'add' in a similar way?

> However, a language which is suitable for beginners is usually not
> suitable for professionals. C++ can be made safer, but only at the cost
> of removing features which professionals require in this language. So it
> depends on which developers the project needs, and which of them can be
> allocated to it.
>

The idea was to provide two sets of features, as in the vector example: one
safe (default) feature for the beginner, i.e. operator[] with a range check,
and a second set, i.e. at() without a range check.

Kind regards Peter

Peter Most

Dec 7, 2005, 1:55:22 PM
Rob wrote:

> Peter Most wrote:
>
[snip]

>> The novice developer reads more about vector and learns that there
>> are two ways to access the elements: a fast but unsafe way via
>> operator[], and a slightly slower but safer way via at(). He starts
>> to use at() because it is a lot safer and he doesn't need that last
>> bit of speed.
>
> No. The novice developer is unlikely to pick up on the existence of at(),
> but will be more careful in the use of operator[]. This isn't a fault of
> the novice: it's because most examples (at least those that don't
> involve using iterators) in introductory texts use operator[].
>

That's the point I'm trying to make. If operator[] checked the index,
then the novice developer would start with the safe version.

> What you are describing is the same sort of thinking that resulted in
> Java: they removed features from the language they didn't like, and
> added others that they claimed were safer.
>
> The problem with that is that such a language is off-putting to someone
> with any experience. With a disciplined approach, the benefits of a
> "safe" language or library are relatively limited -- and can actually be
> limiting, by eliminating some powerful techniques.
>

A little further down in my post I explain that I don't want to remove
anything from the language ;-) but rather "change" or "tweak" it a little.
Although one could argue that it is time to remove unsafe functions like
strcat(), strcpy(), etc.

Kind regards Peter

Robert Kindred

Dec 7, 2005, 2:10:20 PM

"Peter Most" <Peter...@gmx.de> wrote in message
news:4395bb80$0$27884$9b4e...@newsread4.arcor-online.net...
> Hello everybody,
>
> I would like to get some opinions about whether it would make sense to
> change the C++ design principles so that it would become safer for beginners.
>
[]

>
> Should C++ be changed into a more beginner-friendly language?
>
> I'm very interested in hearing your opinions.
>
> Kind regards
>
> Peter

When I was in school, I was very impressed by what was called the Fortran
Checkout Compiler. It was used by students on a mainframe, and there was
nothing that they could do to break the system, or to affect other students
on the mainframe. It ran quite slowly, with all of its checking, but there
was no need to change the Fortran programs for this extra safety. These same
programs, once they were fully debugged and tested, could be recompiled, if
desired, on a "normal" Fortran compiler to run at full speed.

C++ Builder by Borland has a similar feature called CodeGuard (which I
am going to miss, because we are leaving Builder for Visual Studio), which
operates in much the same way as the Fortran Checkout Compiler. There is
another tool I hope to become acquainted with called BoundsChecker. These
tools add much checking to programs, which slows them down considerably, but
makes them safe. The big plus for me is that it is a make option to turn it
on or off. This is the way I would recommend beginners use C++, in addition
to maximizing the warning level.

Another thought that has occurred to me is that if the above tools work
well enough, they could make C++ safe enough to be run in a browser. There
is a difference, however, between program correctness and good program
behavior. Still, I wouldn't mind at all seeing a tool to make it possible
that a browser would be willing to run my C++ application, provided it was
identified to be compiled in "good behavior mode".

my 0.02,

Robert Kindred

Walter Bright

Dec 7, 2005, 6:25:35 PM

"Sektor van Skijlen" <etho...@pl.wp.spamu.lubie.nie.invalid> wrote in
message news:dn4o7b$s62$1...@kujawiak.man.lodz.pl...

> For a complicated language (that is, a language that provides a lot of
> capabilities and programming paradigms) there is only one way to use it
> correctly: learning. And for "ignorant" developers there's no hope.
> Hardening C++ does not help with that.

I must say I disagree with this somewhat fatalistic viewpoint. Complexity
doesn't necessarily come from power. It often comes from:

1) inconsistency
2) a poor match of language design with the paradigm used
3) attempts to retain backwards compatibility with obsolete features
4) poor understanding of what the problem is
5) adherence to ideas that are conventionally assumed to be true, but are not
6) features that are inconsistent with intuition

Anyone can design something complicated. Genius is in discovering the
underlying simplicity and designing to that. A good test of genius in design
is when, after it is introduced, everyone else slaps their head and thinks
"of course that's the way it should be, it's obvious."

For example, consider the evolution of the design of guns. They've gotten
far more capable, reliable, flexible, etc., but are much simpler and safer
to use. The revolutionary improvements made are so obvious to us now, we
wonder why nobody thought of it before the genius that did, and we find it
difficult to imagine making a gun any other way.

Back to C++, and for an example of an inconsistency, consider this example
from C++98 13.4-5:
-------------------
struct X {
    int f(int);
};

int (X::*p1)(int) = &X::f;   // OK
int (X::*p5)(int) = &(X::f); // error: wrong syntax for pointer to member
-----------------------------
This is the only place in C++ where parenthesizing an expression changes its
semantic meaning other than operator precedence. It's inconsistent, it's
buried in the spec, and it requires a special kludge in the compiler
implementation to make it work. I've never found an explanation for it.

Sure, this is very obscure, and even the C++ experts don't know it's there.
But there are some that do trip people up:

1) the well-known template > angle bracket tokenizing problems
2) inconsistent behavior between std::string's and quoted string literals
3) inconsistent behavior between core arrays, and std::vector
4) inconsistent and counterintuitive name lookup rules for dependent and
non-dependent names
5) two level name lookup

Are these inconsistencies necessary to get the power? I don't believe so.

Walter Bright
www.digitalmars.com C, C++, D programming language compilers

Peter Most

Dec 8, 2005, 6:52:43 AM
kanze wrote:

>> The novice developer reads more about vector and learns that
>> there are two ways to access the elements: a fast but unsafe
>> way via operator[], and a slightly slower but safer way via
>> at(). He starts to use at() because it is a lot safer and he
>> doesn't need that last bit of speed.
>
> What makes you say that there is any difference in speed? I
> would expect that in the default mode, operator[] and at() have
> about the same performance. The only difference is that at()
> raises an exception, so it represents an error which is, in some
> way, expected (or expectable), albeit exceptional, whereas
> operator[] will provoke an immediate core dump -- an assertion
> failure.
>

The difference is probably very small, if it can be measured at all, but
the range check in at() isn't free. But the question remains: why are there
two versions for accessing the elements?

>> The ignorant developer also starts to use vector but doesn't
>> learn that there are two interfaces, so he still uses operator[]
>> and still has problems and bugs with the index access.
>
> A developer who has problems with indexes has problems,
> regardless of what the library does. Using operator[], and
> crashing immediately, is probably the best solution. But note
> that if he has problems with indexes, it won't always be
> apparent -- accessing the index 2 when the data needed is at
> index 1 will not cause an immediate error if the vector has a
> size of 10, but will result in something worse than a program
> crash -- a silently incorrect result.
>

I have to disagree. Crashing is never a good solution. If the program stops
because the exception wasn't handled, then it's probably OK.

>> So my question is now: would it not have been better to
>> reverse the semantics of the two versions, meaning operator[]
>> checks the index and throws an exception, and at() allows
>> unchecked access?
>
> That sounds like a good way of increasing the danger.
>

What danger would it increase?

> The C++ standard has no concept of "must crash" (except in the
> case of the library functions assert() and abort()). Maybe this

If an exception isn't handled, then basically it "crashes" but in a less
dramatic way.

> For some things. The reasons have nothing to do with the
> relative safety of the languages, however, but with the
> available infrastructure -- it's possible to write server side
> dynamic web pages in C++, but the available infrastructures make
> it far easier in Java.
>

That's probably another reason why I have the feeling C++ is losing ground:
the lack of standard libraries. The Boost library is making
excellent progress to remedy this, but the huge companies behind Java and
C# make it very hard to catch up.

> It's interesting to note that Java hasn't replaced C++ in
> critical applications, because it isn't safe enough. In the
> hands of an organization which knows what it is doing, C++ is in
> fact far safer than Java.
>

But that's exactly my point: those developers who don't know what they
are doing are giving C++ a bad reputation.

benben

Dec 8, 2005, 6:51:34 AM
> Another thought that has occurred to me is that if the above tools work
> well enough, they could make C++ safe enough to be run in a browser. There
> is a difference, however, between program correctness and good program
> behavior. Still, I wouldn't mind at all seeing a tool to make it possible
> that a browser would be willing to run my C++ application, provided it was
> identified to be compiled in "good behavior mode".

Don't you think C++ is a bit too powerful for browsers? Letting the C++
program use pointers means it can access anything within its memory
space, which may include the browser itself if the code is dynamically
linked.

Ben

kanze

Dec 8, 2005, 11:16:23 AM
Robert Kindred wrote:

> Another thought that has occurred to me is that if the
> above tools work well enough, they could make C++ safe
> enough to be run in a browser.

I run C++ all the time in my browser. The browser itself
(Firefox on my Linux box, Netscape under Solaris) is written in
C++, as are, I imagine, a number of plug-ins.

> There is a difference, however, between program correctness
> and good program behavior.

A good program can still be a bad neighbor. Some of the viruses
are very well written programs, and "correct" by all classical
measures. When running unknown code, you run it as a user with
no rights, and the OS protects you (or should).

> Still, I wouldn't mind at all seeing a tool to make it
> possible that a browser would be willing to run my C++
> application, provided it was identified to be compiled in
> "good behavior mode".

Part of the problem is that C++ is designed to be statically
compiled. It should be possible to define a portable "byte
code" with an interpreter for it, but to my knowledge, no one
has done so. It would be very difficult for a browser to run
code compiled for Windows on a PC on my Sparc under Solaris. The
current state of affairs is that to run C++ directly, my browser
would have to download the sources, and compile and link them.

So my browsers limit themselves to running C++ which has already
been compiled for my platform.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Dec 8, 2005, 11:21:28 AM

Peter Most wrote:

> Safer in the sense that it does not simply crash, but throws an
> exception which hopefully will be caught somewhere.

But that makes it more dangerous. The only "good" thing that
wrong code can do is crash, so that you know it's wrong. And
the sooner it crashes, the better. The unsafe behavior of
operator[] is that it might not crash -- that you might not even
notice the error, and the program will simply give bad results.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Dec 8, 2005, 11:21:50 AM
Peter Most wrote:
> Rob wrote:

> > Peter Most wrote:

> [snip]
> >> The novice developer reads more about vector and learns
> >> that there are two ways to access the elements: a fast but
> >> unsafe way via operator[], and a slightly slower but safer
> >> way via at(). He starts to use at() because it is a lot
> >> safer and he doesn't need that last bit of speed.

> > No. The novice developer is unlikely to pick up on the
> > existence of at(), but will be more careful in the use of
> > operator[]. This isn't a fault of the novice: it's
> > because most examples (at least those that don't involve
> > using iterators) in introductory texts use operator[].

> That's the point I'm trying to make. If operator[] checked
> the index, then the novice developer would start with the
> safe version.

You don't seem to be too familiar with C++ to begin with.
Calling operator[] with an out-of-bounds index is undefined
behavior precisely so that an implementation *can* check the
index, and do whatever is appropriate *on* *that* *platform* for
a programming error. On Unix, I expect a core dump, but from
what I understand, this is not the usual case under Windows.

Most serious implementations of the standard library today come
with debugging versions, which check not only the index, but a
lot of other things. I would expect that to be the version a
beginner is using -- there is probably some work to be done
concerning compiler options and defaults, for this to be the
case, but compiler options and defaults are not a subject of
standardization.
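As one concrete illustration (assuming a GCC-style toolchain, where the
debug version of the library is enabled with -D_GLIBCXX_DEBUG; other
implementations spell it differently):

    // debug.cpp -- build with: g++ -D_GLIBCXX_DEBUG debug.cpp
    #include <vector>

    int main()
    {
        std::vector<int> v( 3, 0 );
        v[ 10 ] = 42;   // the debug library aborts here with a diagnostic;
                        // in a normal build this is silent undefined behavior
    }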

> > What you are describing is the same sort of thinking that
> > resulted in Java: they removed features from the language
> > they din't like, and added others that they claimed were
> > safer.

I view the situation a little differently. Java's designers
decided that one specific level of safety is appropriate for
all. It's implemented in the language, and you can't change it.
C++ leaves a lot more up to the implementation and the
programmer. It can be as dangerous OR as safe as you want.

IMHO, the built-in level of safety in Java is too low for
serious applications, and since you cannot increase it, Java is
largely unusable for large classes of applications where
reliability is a must. And while I sometimes wish that the
default level of safety of C++ were a little higher, it's not
that difficult to make it as safe as is needed, and I have no
qualms about using it in critical software.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Dec 8, 2005, 11:20:45 AM
David Abrahams wrote:
> Peter Most <Peter...@gmx.de> writes:

Well, if an out-of-bounds access in operator[] were required to
abort, it would be safer in the sense that bad programs won't
keep running while wrong, and no one will accidentally interpret
bad results as good. (I know that wasn't what he was proposing,
but it's the only "improvement" which makes sense.)

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Dec 8, 2005, 11:22:12 AM
Peter Most wrote:
> kanze wrote:

> >> The novice developer reads more about vector and learns
> >> that there are two ways to access the elements: a fast but
> >> unsafe way via operator[], and a slightly slower but safer
> >> way via at(). He starts to use at() because it is a lot
> >> safer and he doesn't need that last bit of speed.

> > What makes you say that there is any difference in speed? I
> > would expect that in the default mode, operator[] and at()
> > have about the same performance. The only difference is
> > that at() raises an exception, so it represents an error which
> > is, in some way, expected (or expectable), albeit
> > exceptional, whereas operator[] will provoke an immediate
> > core dump -- an assertion failure.

> The difference is probably very small, if it can be measured at all,
> but the range check in at() isn't free. But the question remains: why
> are there two versions for accessing the elements?

Because there are cases when accessing with an index out of
bounds isn't an error, but simply an exceptional condition from
which you want to recover. Such cases aren't all that frequent,
but when they occur, at() is there. (One could argue that such
cases are rare enough not to require support in the standard
library -- it is, after all, easy enough to write the indexing
with an if.)
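For instance, such a hand-written check might look like this (a sketch;
the function and its name are mine):

    #include <cstddef>
    #include <vector>

    // Recoverable out-of-range handling, written with an if instead of at():
    int value_or_default( std::vector<int> const& v, std::size_t i,
                          int fallback )
    {
        if ( i < v.size() )
            return v[ i ];  // index is valid: ordinary unchecked access
        return fallback;    // the exceptional condition: recover, don't crash
    }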

> >> The ignorant developer also starts to use vector but
> >> doesn't learn that there are two interfaces, so he still uses
> >> operator[] and still has problems and bugs with the index
> >> access.

> > A developer who has problems with indexes has problems,
> > regardless of what the library does. Using operator[], and
> > crashing immediately, is probably the best solution. But
> > note that if he has problems with indexes, it won't always
> > be apparent -- accessing the index 2 when the data needed is
> > at index 1 will not cause an immediate error if the vector
> > has a size of 10, but will result in something worse than a
> > program crash -- a silently incorrect result.

> I have to disagree. Crashing is never a good solution. If the
> program stops because the exception wasn't handled, then it's
> probably OK.

And if the program doesn't stop? Crashing is the accepted way
of handling programming errors. It's required in critical
systems, but it is also generally a good idea in most
non-critical systems, at least those which handle real data, and
whose results are used. Masking errors may give the user a
warm, fuzzy feeling, but it also gives him wrong results.

> >> So my question is now: would it not have been better to
> >> reverse the semantics of the two versions, meaning
> >> operator[] checks the index and throws an exception, and
> >> at() allows unchecked access?

> > That sounds like a good way of increasing the danger.

> What danger would it increase?

The programmers will continue to use operator[], and get an
exception (which in newbie code may end up being caught and
ignored). The real danger, of course, is that a program with an
error continues to run, and that a user counts on its results,
and treats them as correct. This is a danger which must be
avoided at all costs.

> > The C++ standard has no concept of "must crash" (except in
> > the case of the library functions assert() and abort()).
> > Maybe this

> If an exception isn't handled, then basically it "crashes" but
> in a less dramatic way.

If an exception isn't handled, abort() is called. The only
difference between this and an assertion failure is that you
don't get an error message, and the stack has been unwound. The
first is a pain, since you don't know why or where the program
went wrong, and the second is a real disaster -- you have an
inconsistent program state, and you want to run around executing
who knows what additional code.
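
(On some implementations the stack may not even be fully unwound
first -- the standard leaves that implementation-defined.) At the
very least one can recover an error message by installing a terminate
handler; a minimal sketch, with invented names:

#include <cstdlib>
#include <exception>
#include <iostream>
#include <stdexcept>

// Logs a message before aborting, so an unhandled exception at least
// leaves a trace -- the abort() still produces the core dump.
void logAndAbort()
{
    std::cerr << "terminating on unhandled exception" << std::endl;
    std::abort();
}

int main()
{
    std::set_terminate(logAndAbort);
    throw std::runtime_error("nobody catches this");
}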

> > For some things. The reasons have nothing to do with the
> > relative safety of the languages, however, but with the
> > available infrastructure -- it's possible to write server
> > side dynamic web pages in C++, but the available
> > infrastructures make it far easier in Java.

> That's probably another reason why I have the feeling C++ is
> losing ground, because of the lack of standard libraries. The
> Boost library is making excellent progress to remedy this, but
> the huge companies behind Java and C# make it very hard to
> catch up.

It's more than just libraries -- it's a complete infrastructure.
But it's true that the lack of libraries does hurt in some
cases; if I had to write a GUI front-end today, I'd do it in
Java, because Swing is fairly good, it's portable, and it's
standard. There are some portable GUI libraries for C++, of
course, but you don't automatically expect the next C++
programmer on the project to know them -- he may have used a
different library on his last project.

On the other hand, the fact that Java raises an exception in
case of a bounds check error, instead of aborting, means that
I'm not about to use it in any program which handles important
data.

> > It's interesting to note that Java hasn't replaced C++ in
> > critical applications, because it isn't safe enough. In the
> > hands of an organization which knows what it is doing, C++
> > is in fact far safer than Java.

> But that's exactly my point, that those developers who don't
> know what they are doing, are giving C++ a bad reputation.

Developers who don't know what they are doing did give C++ a bad
reputation in the past. But developers who don't know what they
are doing tend to go to the trendiest language -- they've since
given Java a worse reputation than it deserves, and today,
they're all working in C#, so C++ doesn't have to worry. Unless
it becomes the latest trend again:-).

Seriously, a developer who doesn't know what he is doing will
write incorrect code in any language. Which is worse: for the
program to crash, or for it to give the user a warm fuzzy
feeling and wrong answers?

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Dec 8, 2005, 11:21:07 AM
Bob Hairgrove wrote:
> On 6 Dec 2005 13:17:34 -0500, Peter Most <Peter...@gmx.de>
> wrote:

[...]

> >Should C++ be changed to be a more beginner-friendly language?

> No ... isn't that why we have Java, C# and Visual Basic? ;)

I don't know about C# or Visual Basic, but I certainly wouldn't
consider Java more beginner friendly than C++. Except, perhaps,
in the sense that its syntax has fewer "traps". (But that's a
good thing even for experts.) It's far harder to write a robust
and correct application in Java than it is in C++.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Peter Most

Dec 8, 2005, 3:14:22 PM
kanze wrote:

> Peter Most wrote:
>> Rob wrote:
>
>> > Peter Most wrote:
>
>> [snip]
>> >> The novice developer reads more about vector and learns,
>> >> that there are two ways to access the elements. A fast but
>> >> unsafe way via the operator[] and a version which is little
>> >> slower but safer way via at(). He starts to use at()
>> >> because it is a lot safer and he doesn't need that last bit
>> >> of speed.
>
>> > No. The novice developer is unlikely to pick up on
>> > existence of at(), but will be more careful in use of
>> > operator[]. This isn't a fault of the novice: it's
>> > because most examples (at least those that don't involve
>> > using iterators) in introductory texts use operator[]
>
>> That's the point I'm trying to make. If operator[] would check
>> the index then the novice developer would start to use the
>> safe version.
>
> You don't seem to be too familiar with C++ to begin with.

Please be careful with such statements!

> Calling operator[] with an out of bounds index is undefined
> behavior precisely so that an implementation *can* check the
> index, and do whatever is appropriate *on* *that* *platform* for
> a programming error. On Unix, I expect a core dump, but from
> what I understand, this is not the usual case under Windows.
>

Wouldn't it be easier to simply define that operator[] has to check the
index, then it would be one less undefined behavior?

Bob Hairgrove

Dec 8, 2005, 3:14:00 PM
On 8 Dec 2005 11:22:12 -0500, "kanze" <ka...@gabi-soft.fr> wrote:

>Developers who don't know what they are doing did give C++ a bad
>reputation in the past. But developers who don't know what they
>are doing tend to go to the trendiest language -- they've since
>given Java a worse reputation than it deserves, and today,
>they're all working in C#, so C++ doesn't have to worry.

LOL ... good one!

>Unless it becomes the latest trend again:-).

Actually, I think this would improve the situation. More
developers (even bad ones) == more compilers sold == better support
for the C++ standard.

My reasoning?

(a) It costs lots of time/effort/money to develop a truly
standards-conforming implementation;

(b) The important compiler vendors/implementers nowadays all realize
how important it is to be standards-conforming, even if 100% remains
an unreachable goal for some;

(c) Developers will pay more attention to writing standards-conforming
code if the implementation they use enforces this;

(d) The C++ standard itself will improve when more developers use it
and provide feedback and suggestions to the standards committee.

The worst thing that could happen to C++ is to become a little-used,
"niche" or elitist language. But I see no danger of that today.

--
Bob Hairgrove
NoSpam...@Home.com

Peter Most

Dec 8, 2005, 3:16:07 PM
kanze wrote:

If crashing would be such a good thing, then why check for errors at all? I
think only small, short running programs can "live" with a crash. But all
other programs simply can not accept a crash.

Andrei Alexandrescu (See Website For Email)

Dec 8, 2005, 3:19:30 PM
kanze wrote:
> IMHO, the built in level of safety in Java is too low for
> serious applications,

Specifically what?

> and since you cannot increase it, Java is
> largely unusable for large classes of applications, where
> reliability is a must. And while I sometimes wish that the
> default level of safety of C++ was a little higher, it's not
> that difficult to make it as safe as is needed, and I have no
> qualms about using it in critical software.

I'd say "poppycock!" if it weren't for two reasons: (1) it's you, and
(2) I don't know exactly what "poppycock" means.

How can you make C++ "as safe as is needed"? Let's say, I need to make
C++ such that there are no soft memory errors, just like Java. (I
understand that's a pretty low bar, in wake of what you wrote above.)
How do I do that?


Andrei

Andrei Alexandrescu (See Website For Email)

Dec 8, 2005, 3:20:08 PM
kanze wrote:
> You don't seem to be too familiar with C++ to begin with.
> Calling operator[] with an out of bounds index is undefined
> behavior precisely so that an implementation *can* check the
> index, and do whatever is appropriate *on* *that* *platform* for
> a programming error.

Huh? Could you substantiate that statement, particularly since you used
"precisely"? I thought the behavior of operator[] undefined for the sake
of speed. Otherwise it would have been very easy to define illegal
invocations of operator[] to stop execution in a way specified by the
implementation.

As far as I've known for the longest time, "undefined" often stands for
"for the sake of generating the fastest code possible when the program
has no error."


Andrei

Andrei Alexandrescu (See Website For Email)

Dec 8, 2005, 3:20:30 PM
kanze wrote:
> Bob Hairgrove wrote:
>
>>On 6 Dec 2005 13:17:34 -0500, Peter Most <Peter...@gmx.de>
>>wrote:
>
>
> [...]
>
>
>>>Should C++ be changed to be a more beginner-friendly language?
>
>
>>No ... isn't that why we have Java, C# and Visual Basic? ;)
>
>
> I don't know about C# or Visual Basic, but I certainly wouldn't
> consider Java more beginner friendly than C++. Except, perhaps,
> in the sense that its syntax has less "traps". (But that's a
> good thing even for experts.) It's far harder to write a robust
> and correct application in Java than it is in C++.

I don't have hard data, but my intuition goes the exact opposite way.
IMHO the learning curve of C++ quite resembles a wall, and the multitude
of soft errors make it very tough for a beginner to make sure they wrote
a robust and correct application. Java doesn't have soft errors, and
intuitively that should make it easier for a beginner to make a program
work by other means than pure chance.

But since your statement above is this resolute, perhaps you do have
some evidence. Could you please share?


Andrei

Bob Bell

Dec 8, 2005, 4:59:40 PM
Peter Most wrote:
> kanze wrote:
> > Calling operator[] with an out of bounds index is undefined
> > behavior precisely so that an implementation *can* check the
> > index, and do whatever is appropriate *on* *that* *platform* for
> > a programming error. On Unix, I expect a core dump, but from
> > what I understand, this is not the usual case under Windows.
> >
> Wouldn't it be easier to simply define that operator[] has to check the
> index, then it would be one less undefined behavior?

If you want it to throw an exception like at() does, then no, I don't
think that would be easier. I think that would in many cases mask bugs
-- most of the time, calling operator[] with an out-of-bounds index is
a bug; throwing an exception when a bug occurs simply allows the
program to continue running with a bug.

I much prefer the debugging checks described by James above.

Bob

Peter Most

Dec 8, 2005, 4:58:04 PM
kanze wrote:

> Peter Most wrote:
>> kanze wrote:
>
>> I have to disagree. Crashing is never a good solution. If the
>> program stops because the exception wasn't handled, then it's
>> probably OK.
>
> And if the program doesn't stop? Crashing is the accepted way
> of handling programming errors. It's required in critical
> systems, but it is also generally a good idea in most
> non-critical systems, at least those which handle real data, and
> whose results are used. Masking errors may give the user a
> warm, fuzzy feeling, but it also gives him wrong results.
>

Since when is crashing an accepted way of handling errors? It's the worst
handling!
Are you considering exception handling as masking errors?

>> >> So my question is now, would it not have been better to
>> >> reverse the semantic of the two versions, meaning
>> >> operator[] checks the index and throws an exception and
>> >> at() would allow an unchecked access?
>
>> > That sounds like a good way of increasing the danger.
>
>> What danger would it increase?
>
> The programmers will continue to use operator[], and get an
> exception (which in newbie code may end up being caught and
> ignored). The real danger, of course, is that a program with an
> error continues to run, and that a user counts on its results,
> and treats them as correct. This is a danger which must be
> avoided at all costs.
>

I agree, if a developer would actually catch and ignore the exception then
there is a huge problem and then and only then would a crash actually be
acceptable. But are there really such irresponsible developers?

>> > The C++ standard has no concept of "must crash" (except in
>> > the case of the library functions assert() and abort()).
>> > Maybe this
>
>> If an exception isn't handled, then basically it "crashes" but
>> in a less dramatic way.
>
> If an exception isn't handled, abort() is called. The only
> difference between this and an assertion failure is that you
> don't get an error message, and the stack has been unwound. The
> first is a pain, since you don't know why or where the program
> went wrong, and the second is a real disaster -- you have an
> inconsistent program state, and you want to run around executing
> who knows what additional code.
>

But it would be quite easy to add a toplevel exception handler and log the
error message. In the case of a crash you can only hope that you either are
getting some kind of core/memory dump which you can analyse with a debugger
or the program is already running in the debugger to get the call stack.
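
A minimal sketch of such a toplevel handler (run() is an invented
stand-in for the real application code):

#include <iostream>
#include <stdexcept>

// The application proper; here it just fails, to show the handler.
int run()
{
    throw std::out_of_range("bad index somewhere deep down");
}

int main()
try {
    return run();
}
catch (const std::exception& e) {
    std::cerr << "fatal: " << e.what() << std::endl;  // logged, not silent
    return 1;
}
catch (...) {
    std::cerr << "fatal: unknown exception" << std::endl;
    return 1;
}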

Kind regards Peter

Bob Bell

Dec 8, 2005, 6:47:56 PM
Peter Most wrote:
> kanze wrote:
>
> > Peter Most wrote:
> >> Safer in the way that it is not simply crashing, but throws an
> >> exception which hopefully will be caught somewhere. At least
> >> that way you can add a toplevel exception handler and know
> >> that this error won't crash your program.
> >
> > But that makes it more dangerous. The only "good" thing that
> > wrong code can do is crash, so that you know it's wrong. And
> > the sooner it crashes, the better. The unsafe behavior of
> > operator[] is that it might not crash. That you might not even
> > notice the error, and the program will simply give bad results.
> >
> If crashing would be such a good thing, then why check for errors at all?

To *make sure* it crashes, so that we can debug the program.

> I
> think only small, short running programs can "live" with a crash. But all
> other programs simply can not accept a crash.

But they can accept producing incorrect results? You seem to be saying
that incorrect results are preferred over crashes.

"Continue running at all costs, even when bugs are detected," simply
makes it more difficult to find and fix bugs, decreasing (not
increasing) the reliability of the program and the correctness of its
results.

(Controlled) crashing (i.e., from an assertion or other debugging
check), on the other hand, frequently makes it easier to find and fix
bugs, increasing reliability.

Bob

Branimir Maksimovic

Dec 8, 2005, 6:54:44 PM
Peter Most wrote:
> Wouldn't it be easier to simply define that operator[] has to check the
> index, then it would be one less undefined behavior?
>

The main reason I'm not using 'at' is that it throws an exception.
What do you do with that exception? What action do you perform?
try-ing around every vector 'at' is just silly.
If I could reliably get file and line, then OK, but it is much easier
to place an assertion and get a core dump which traces to the
assertion that failed, where one can also examine variables with the
debugger. Much more usable.
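
Roughly like this, with invented names:

#include <cassert>
#include <cstddef>
#include <vector>

// A bad index is a programming error: die right here, with file and
// line in the assertion message and a core dump for the debugger.
int element(const std::vector<int>& v, std::size_t i)
{
    assert(i < v.size() && "index out of bounds");
    return v[i];
}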

Greetings, Bane.

Walter Bright

Dec 8, 2005, 6:54:21 PM

"Peter Most" <Peter...@gmx.de> wrote in message
news:43988959$0$27885$9b4e...@newsread4.arcor-online.net...

> kanze wrote:
> > But that makes it more dangerous. The only "good" thing that
> > wrong code can do is crash, so that you know it's wrong. And
> > the sooner it crashes, the better. The unsafe behavior of
> > operator[] is that it might not crash. That you might not even
> > notice the error, and the program will simply give bad results.
> >
> If crashing would be such a good thing, then why check for errors at all? I
> think only small, short running programs can "live" with a crash. But all
> other programs simply can not accept a crash.

James is right. The alternative to a failed program crashing in an obvious
manner is having it fail in an undetectable way, like for example generating
subtly corrupted results. Would you care to use, say, a bridge design
program, that generated corrupt stress numbers? Would you even know anything
was wrong with the numbers? How about a bank's financial processing program?

A crash, as early and as obvious as possible, is the only way. Then, the
user of the program *knows* it's broken and will not mistakenly rely on its
output. The developer of the program *knows* there's a bug, which is the
first step towards fixing it. And the airliner autopilot *knows* its
crashed, and the backup system can wake up the pilot before it augers into a
mountainside.

Solid, professional programs should have as many Contract Programming and
sanity checks on their internal workings as practical, considering the
performance requirements. If a fault is detected, the program should
promptly inform the user and shut itself down (crash, if you will).

-Walter Bright


www.digitalmars.com C, C++, D programming language compilers

David Abrahams

Dec 8, 2005, 7:01:23 PM
"Andrei Alexandrescu (See Website For Email)" <SeeWebsit...@moderncppdesign.com> writes:

> kanze wrote:
>> You don't seem to be too familiar with C++ to begin with.
>> Calling operator[] with an out of bounds index is undefined
>> behavior precisely so that an implementation *can* check the
>> index, and do whatever is appropriate *on* *that* *platform* for
>> a programming error.
>
> Huh? Could you substantiate that statement, particularly since you used
> "precisely"? I thought the behavior of operator[] undefined for the sake
> of speed.

Exactly. On some platforms the appropriate behavior in case of such a
programming error is... hope that it never occurs, because we can't
afford the cost of checking.

> Otherwise it would have been very easy to define illegal
> invocations of operator[] to stop execution in a way specified by the
> implementation.
>
> As far as I've known for the longest time, "undefined" often stands for
> "for the sake of generating the fastest code possible when the program
> has no error."

In this case it means, "on some implementations it may be important
not to pay for the check, and on some others we don't want to restrict
the implementation's choice of behavior in case a check fails"

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Dietmar Kuehl

Dec 9, 2005, 5:49:18 AM
Peter Most wrote:
> kanze wrote:
>> Calling operator[] with an out of bounds index is undefined
>> behavior precisely so that an implementation *can* check the
>> index, and do whatever is appropriate *on* *that* *platform* for
>> a programming error. On Unix, I expect a core dump, but from
>> what I understand, this is not the usual case under Windows.
>>
> Wouldn't it be easier to simply define that operator[] has to check the
> index, then it would be one less undefined behavior?

I'm pretty capable of making sure that I have no out of bounds
accesses when using arrays (of whatever form, be it built-in arrays,
'std::vector<T>', 'std::deque<T>', etc.). I don't want any redundant
checks for whether I did the right thing. I don't mind them if they
come with no extra cost but it isn't my experience that they do.

Even if there are checks, I guess we would choose different
implementations for how they deal with programming errors: I want the
implementation to "crash" (i.e. write an inspectable post-mortem image
and abort); your preference seems to be that an exception is thrown
such that you can try to continue (which is, IMO, a futile attempt
anyway: if the programmer is incapable of correctly implementing the
successful case, how will he be capable of recovering from his own
errors at all?).
--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence

Andrei Alexandrescu (See Website For Email)

Dec 9, 2005, 5:55:58 AM
Walter Bright wrote:
> James is right.The alternative to a failed program crashing in an obvious
> manner is having it fail in an undetectable way, like for example generating
> subtly corrupted results. Would you care to use, say, a bridge design
> program, that generated corrupt stress numbers? Would you even know anything
> was wrong with the numbers? How about a bank's financial processing program?

That's right, however the assertions being made don't really fit the
context of the discussion.

The context is: why is operator[] undefined for invalid input?

Answer: It is undefined so implementations can check and crash the
programs.

That answer is bogus. If the behavior is undefined, that means I can't
count on it. You can't count on it. We can't count on it. There's only
one who can count on it, and that's "Nobody". :o)

We can't write a standard-conforming C++ program that will "fail
rigorously" using operator[]. It's actually very likely that the program
will silently fail, thus doing everything that James says about "bad"
programs!

The natural conclusion that the answer above leads to, is that
operator[] should be defined to terminate execution for invalid input.
But it doesn't (proof that the answer is bogus); behavior is undefined so
implementations can generate the fastest code for the correct case -
which brings us to the correct answer.

So IMHO the answer to "why is operator[] undefined for invalid input?"
is: "C++ sacrifices memory safety for the sake of efficiency. Although
some people believe that operator[] should be defined for all inputs and
at() should be the alternative with undefined behavior, this is how
things are as of today."

And I still don't understand how C++, being rife with soft errors that
travel below the type system's radar and dynamic checks alike, is safer
than Java, which has no memory errors and no undefined behavior. Please
illuminate me.


Andrei

Stephen Howe

Dec 9, 2005, 5:59:20 AM
> I have to disagree. Crashing is never a good solution.

You're right, it is not good. But the better alternative is... (please fill in
the blanks)?

It also depends on the type of error.
If it is some form of input error or range error, it may be that the program can
recover gracefully, continue with an alternative course of action. That
seems right.

But if it is a logical inconsistency in the program, the type of error that
assert() is useful to catch, any action could be wrong. "Continuing" may
make it hard to diagnose what is wrong.
In the absence of anything better, termination is not bad (hopefully with
enough output to help the programmer to work out what is wrong). The
programmer can fix the logical bug and retry the program.

Stephen Howe

Dietmar Kuehl

Dec 9, 2005, 5:51:24 AM
Peter Most wrote:
> kanze wrote:
>> And if the program doesn't stop? Crashing is the accepted way
>> of handling programming errors. It's required in critical
>> systems, but it is also generally a good idea in most
>> non-critical systems, at least those which handle real data, and
>> whose results are used. Masking errors may give the user a
> >> warm, fuzzy feeling, but it also gives him wrong results.
>>
> Since when is crashing an accepted way of handling errors? It's the worst
> handling!

It may be the worst kind of handling from a sales point of view. After
all, visible crashes are clearly an indication of bad stability.
However, crashing is much preferable to causing damage by continuing
after an unknown and thus almost certainly only partially recovered
programming error. However, the programming error is there, whether
the program crashes or tries to stumble on. The error is obviously
not due to an [anticipated] bad external state.

> Are you considering exception handling as masking errors?

I would consider some forms of exception handling as masking errors.
There are clearly reasonable uses of exceptions, e.g. to recover
from an anticipated error. Recovery from a programming error is,
as I stated before, very unlikely to be successful and I would
consider trying to handle it with an exception "masking errors". For
example, an out of bounds access in an array is a programming error
unless it deliberately used a method with specified semantics in case
of an out of bounds access.

> I agree, if a developer would actually catch and ignore the exception then
> there is a huge problem and then and only then would a crash actually be
> acceptable. But are there really such irresponsible developers?

I have come across even more irresponsible developers: those who
think they can generically recover from their own errors by handling
exceptions! They were obviously incapable of creating correct code
but confident enough to think that they could correctly recover
automatically from their flaws. To my mind this is very strange
logic...

> But it would be quite easy to add a toplevel exception handler and log the
> error message.

... and continue with some corrupted state?

> In the case of a crash you can only hope that you either
> are getting some kind of core/memory dump which you can analyse with a
> debugger or the program is already running in the debugger to get the call
> stack.

I don't know what kind of operations you have done, but in the places
I have worked it is customary to monitor program failures and provide
some form of reaction. At the bare minimum programs are automatically
restarted (in a clean state, of course) and a human is alerted to a
problem caused by a program crash. Each crash was investigated to
determine the reason of the crash (which involved analysis of the
core dump). This may be impractical when shipping a product in which
case you are better off testing it thoroughly such that program
crashes are extremely unlikely. In any case it makes sense to test
the software with some system detecting various kinds of programming
errors, e.g. purify and/or a debugging version of the standard
library.


--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence

Dietmar Kuehl

Dec 9, 2005, 5:50:35 AM
Peter Most wrote:
> If crashing would be such a good thing, then why check for errors at all?

Do you check for programming errors in production code? I do not!
... and I also don't spend any time trying to recover from situations
I don't know about. After all, how can I recover from something I
don't know?

> I think only small, short running programs can "live" with a crash.

No program can live with a crash. Neither "small, short running" ones,
nor large, long running ones. However, it does not matter: if there is
a programming error, we are best off stopping as soon as possible
before causing more problems by stumbling on. Better have a loud
crash than a silent failure. Undetected problems are much worse than
those you do know about.

> But all other programs simply can not accept a crash.

This is why it is important to have a good software production
process in place and verify that the software is correct. Trying to
recover from an error the programmer made is futile. After all, if
the programmer had anticipated this error he would have been better
off removing it rather than recovering from it in the first place.


--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence

Andrei Alexandrescu (See Website For Email)

Dec 9, 2005, 5:56:28 AM
Branimir Maksimovic wrote:
> Peter Most wrote:
>
>>Wouldn't it be easier to simply define that operator[] has to check the
>>index, then it would be one less undefined behavior?
>>
>
>
> The main reason I'm not using 'at' is that it throws an exception.
> What do you do with that exception? What action do you perform?
> try-ing around every vector 'at' is just silly.

But the idea wasn't to try around 'at'. There are tons of cases in which
you can centralize error handling and return the system to a stable state.

Simple example: I load some data in a vector<Data> and an index into the
data into another vector<unsigned>. If the data is corrupt (an event of
low likelihood but possible nonetheless), at() will throw an exception,
I can report (in a centralized manner) that the data is corrupt, clear
the index, and rebuild or whatnot.
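
Roughly, with made-up names (Data is just a stand-in):

#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

struct Data { int payload; };

// Every access goes through at(); one handler recovers centrally.
void process(const std::vector<Data>& data,
             const std::vector<unsigned>& index)
{
    try {
        for (std::size_t i = 0; i != index.size(); ++i)
            std::cout << data.at(index[i]).payload << '\n';
    }
    catch (const std::out_of_range&) {
        std::cerr << "index corrupt -- clearing and rebuilding\n";
        // clear the index, rebuild, or whatnot
    }
}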

I disagree that the only thing to do in the case of an out-of-bound
access is to crash immediately. I think it's a natural outcome of our
history: we C++ programmers are so used to unchecked array access, we
have in mind that out-of-bounds access indicates a deep application
error - and we develop programs like that.

However, if we know that bounds checking is performed reliably and
reproducibly, we can develop programming styles that allow better
centralization of error handling.


Andrei

kanze

Dec 9, 2005, 10:08:33 AM
Peter Most wrote:
> kanze wrote:

> > Peter Most wrote:
> >> David Abrahams wrote:

> >> > Peter Most <Peter...@gmx.de> writes:

[...]


> >> > Safer how? It won't make an unintentional out-of-bounds
> >> > access any more correct. Incorrect code can always have
> >> > unintended (i.e. unsafe) behaviors.

> >> Safer in the way that it is not simply crashing, but throws
> >> an exception which hopefully will be caught somewhere. At
> >> least that way you can add a toplevel exception handler and
> >> know that this error won't crash your program.

> > But that makes it more dangerous. The only "good" thing
> > that wrong code can do is crash, so that you know it's
> > wrong. And the sooner it crashes, the better. The unsafe
> > behavior of operator[] is that it might not crash. That you
> > might not even notice the error, and the program will simply
> > give bad results.

> If crashing would be such a good thing, then why check for
> errors at all?

Because bounds errors (and any number of other types of errors)
don't usually crash.

> I think only small, short running programs can "live" with a
> crash. But all other programs simply can not accept a crash.

Realistically, all programs must be able to live with a crash.
At least on my system, dereferencing an out of bounds pointer
will cause a crash, as will accessing through a mis-aligned
pointer. And there's really no way to catch the first of these.

In critical applications, it is an absolute rule that in the
slightest doubt concerning the correction of the program, it
must crash. Stumbling on with possibly incorrect data is not
considered an alternative.

The same thing holds for any number of other applications. Most
of the time, wrong answers are worse than no answers.

There are clearly exceptions: the most obvious is a demo program
for an exposition, where a crash is highly visible, and no one
is going to use the results, or even verify that they are
correct. But any time the results are to be used for anything
significant, it is important that they be correct, and crashing
is preferable to wrong results.

kanze

Dec 9, 2005, 10:13:16 AM
Andrei Alexandrescu (See Website For Email) wrote:
> Walter Bright wrote:
> > James is right. The alternative to a failed program crashing
> > in an obvious manner is having it fail in an undetectable
> > way, like for example generating subtly corrupted results.
> > Would you care to use, say, a bridge design program, that
> > generated corrupt stress numbers? Would you even know
> > anything was wrong with the numbers? How about a bank's
> > financial processing program?

> That's right, however the assertions being made don't really
> fit the context of the discussion.

The topic of this subthread has shifted a bit.

> The context is: why is operator[] undefined for invalid input?

> Answer: It is undefined so implementations can check and crash
> the programs.

> That answer is bogus. If the behavior is undefined, that means
> I can't count on it. You can't count on it. We can't count on
> it. There's only one who can count on it, and that's "Nobody".
> :o)

That's one way of looking at it, and I agree that requiring a
program to crash every time there is undefined behavior would be
a major step forward. (Given the difficulty of detecting some
of the cases, I'm not sure that people more concerned than I am
about performance would agree:-).)

The other way is to use an implementation that you know, which
does give the desired behavior. Said implementation would, of
course, still be conformant.

> We can't write a standard-conforming C++ program that will
> "fail rigorously" using operator[]. It's actually very likely
> that the program will silently fail, thus doing everything
> that James says about "bad" programs!

> The natural conclusion that the answer above leads to, is that
> operator[] should be defined to terminate execution for
> invalid input. But it doesn't (proof that the answer is
> bogus); behavior is undefined so implementations can generate
> the fastest code for the correct case - which brings us to the
> correct answer.

> So IMHO the answer to "why is operator[] undefined for invalid
> input?" is: "C++ sacrifices memory safety for the sake of
> efficiency. Although some people believe that operator[]
> should be defined for all inputs and at() should be the
> alternative with undefined behavior, this is how things are as
> of today."

As a general response, of course, you're right. As an answer to
the suggestion that the behaviors of at() and operator[]()
should be inversed, however...

> And I still don't understand how C++, being rife with soft
> errors that travel below the type system's radar and dynamic
> checks alike, is safer than Java, which has no memory errors
> and no undefined behavior. Please illuminate me.

First, of course, the claim that Java has no undefined behavior
is not true. It very definitely has undefined behavior,
explicitly in the case of threading, and in practice, also
because the specification isn't always as precise as one would
like.

The major point, however, is that Java never gives you a choice.
No critical application would ever use dynamic loading, for
example, because of the problems involving versioning; Java
requires it, and at the level of the class (or at the least, the
package). The critical applications I've worked on have made
extensive use of programming by contract, with non-virtual
public functions enforcing the contract before calling virtual
private functions -- a technique forbidden in Java, where an
"interface" can have no implementation code whatever, and
private functions cannot be virtual. On large projects, the
interface (in the classical sense of the word, not the
specialized Java sense -- the .hh files in the C++
implementation) has been under the control of a central
architecture committee, and was only editable with explicit
authorisation. Java requires that the code and the interface be
in the same file, so anytime you can edit one, you can modify
the other. (Admittedly, this last point is more concerned with
project managability than with safety, and really only concerns
large projects.) And I don't think I have to explain to you the
advantages of destructors over finally blocks.
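
The pattern is what is nowadays called the non-virtual interface
idiom; a minimal sketch, with invented names:

#include <cassert>

// The non-virtual public function owns the contract; derived classes
// override only the private virtuals and can never bypass the checks.
class Table
{
public:
    virtual ~Table() {}

    int valueAt(int i) const
    {
        assert(0 <= i && i < size());   // precondition, checked once, here
        return doValueAt(i);
    }

private:
    virtual int size() const = 0;
    virtual int doValueAt(int i) const = 0;
};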

And of course, in Java, even critical failures, like a
VirtualMachineError, are exceptions, so you are forced to
continue executing (at least the finally blocks), even when you
know that the underlying machine is not working correctly.

Obviously, that doesn't mean that everything in Java is bad. I
regularly use garbage collection in C++, just as I regularly use
checking implementations of the STL, but as you correctly point
out, neither of these are requirements of the standard. And of
course, avoiding undefined behavior in C++ is a real art;
strictly specifying the order of evaluation in an expression
would go a long way to solving this. But globally, the problems
are less than in Java.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Dec 9, 2005, 10:51:37 AM
Andrei Alexandrescu (See Website For Email) wrote:
> Branimir Maksimovic wrote:
> > Peter Most wrote:

> >>Wouldn't it be easier to simply define that operator[] has
> >>to check the index, then it would be one less undefined
> >>behavior?

> > The main reason I'm not using 'at' is that it throws an exception.
> > What do you do with that exception? What action do you perform?
> > try-ing around every vector 'at' is just silly.

> But the idea wasn't to try around 'at'. There are tons of
> cases in which you can centralize error handling and return
> the system to a stable state.

If we're talking about "expected" errors, yes. The problem is
the "impossible" errors; the ones that cannot happen in correct
code.

> Simple example: I load some data in a vector<Data> and an
> index into the data into another vector<unsigned>. If the
> data is corrupt (an event of low likelihood but possible
> nonetheless), at() will throw an exception, I can report (in a
> centralized manner) that the data is corrupt, clear the index,
> and rebuild or whatnot.

Where does the data in the index vector come from? What makes
it corrupt? If the index data is, for example, read from disk,
I would tend to prefer validating it immediately after reading
it, but I can imagine cases where a "lazy" validation would be
appropriate, and in such cases, why not let vector<>::at() do
it? If, however, I've validated it, and at() still throws, what
does that tell me about the program state. *Only* that
something is wrong. No more. Whatever corrupted the data in
the index array has probably corrupted a lot of other things as
well -- if some code is writing random memory, it's rare that
the very first write will touch something that you'll immediately
detect.

> I disagree that the only thing to do in the case of an
> out-of-bound access is to crash immediately. I think it's a
> natural outcome of our history: we C++ programmers are so used
> to unchecked array access, we have in mind that out-of-bounds
> access indicates a deep application error - and we develop
> programs like that.

In most of the code I write, an out of bounds access does
indicate a deep application error. That was true when I wrote
Java as well, but as you say, that could just be because of
deeply ingrained habits. The point is, however, it either is or
it isn't, and the programmer must know this. If he hasn't
thought about the issue, then it likely is a deep application
error -- he didn't think that it was possible. But I can easily
imagine (or rather, I can easily imagine that other people can
imagine) cases where an illegal index is part of the design.
Some of the Java programmers I worked with found it fully normal
to read user input and use it directly as an index into an
array, counting on catching the exception to report illegal
input.

> However, if we know that bounds checking is performed reliably
> and reproducibly, we can develop programming styles that
> allow better centralization of error handling.

I can understand that point of view. That there could be two
types of indexing errors, those that can't happen, and those
that can. When those that can't happen do, the only acceptable
behavior is to stop the program as quickly as possible -- no
stack unwinding or any of that stuff. And as you say, if you've
been working with C and C++ -- where an indexing error is undefined
behavior -- you're definitely not in the habit of using it to
catch errors that you know can happen.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Kurt Stege

Dec 9, 2005, 10:58:12 AM
Peter Most wrote:

> Wouldn't it be easier to simply define that operator[] has to check the
> index, then it would be one less undefined behavior?

No. That alone helps nothing. Specifying that the implementation
has to check for a condition, but not specifying what to do in
that case, does not work.

Then it would be allowed to do the same as it does now. That doesn't
help you in any way.

Even more, under the general "as-if" rule the compiler would be
allowed to remove the test, because you can't tell from the specified
behaviour whether the test is executed or not.


As a consequence, and you probably meant it this way, you not only
have to specify that the index is checked, but also what to do when
the check fails.

As you see in this discussion, this decision is not easy.
One user likes to get a core dump, another user likes to
get most high speed for the release version, an implementer
would like to start a debugger, ...


No, it is easier just to specify undefined behaviour, as the
current standard does...

Best regards,
Kurt.

Peter Most

Dec 9, 2005, 1:37:09 PM
Kurt Stege wrote:

> Peter Most wrote:
>
>> Wouldn't it be easier to simply define that operator[] has to check the
>> index, then it would be one less undefined behavior?
>
> No. This far it helps nothing. Specifying that the implementation
> has to check for a condition but not specifying what to do in
> that case does not work.
>

I meant to reverse the semantics i.e.:
1) operator[] checks the index and throws an out_of_range exception (as at()
is currently doing).
2) at() isn't checking anything and maybe crashes (as operator[] is
currently doing).
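
Roughly, as a library-only sketch (the wrapper and its names are
invented, deriving from std::vector merely keeps the example short,
and the const overloads are omitted):

#include <cstddef>
#include <stdexcept>
#include <vector>

template <typename T>
class checked_vector : public std::vector<T>
{
public:
    T& operator[](std::size_t i)        // 1) checked: throws
    {
        if (i >= this->size())
            throw std::out_of_range("checked_vector::operator[]");
        return std::vector<T>::operator[](i);
    }

    T& at(std::size_t i)                // 2) unchecked: "I know what I'm doing"
    {
        return std::vector<T>::operator[](i);
    }
};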

[snip]

> As you see in this discussion, this decision is not easy.
> One user likes to get a core dump, another user likes to
> get most high speed for the release version, an implementer
> would like to start a debugger, ...
>

I have to admit, I hadn't expected so many different opinions about a
supposedly simple thing like vector access. But it gave me a lot more
insight into the *very* different environments where C++ is used.

Kind regards Peter

Peter Most

Dec 9, 2005, 1:36:48 PM
kanze wrote:

> Part of the problem is that C++ is designed to be statically
> compiled. It should be possible to define a portable "byte
> code" with an interpreter for it, but to my knowledge, no one
> has done so. It would be very difficult for a browser to run
code compiled for Windows on a PC on my Sparc under Solaris. The
> current state of affairs is that to run C++ directly, my browser
> would have to download the sources, and compile and link them.
>
I'm not so familiar with the newer microsoft compilers, but isn't the
managed C++ compiler from Microsoft targeting the CLR?
See also: http://en.wikipedia.org/wiki/Managed_Extensions_for_C_Plus_Plus
And if Mono progresses as planned, then it would also run under Linux.

Peter Most

Dec 9, 2005, 4:17:58 PM
kanze wrote:

>> I think only small, short running programs can "live" with a
>> crash. But all other programs simply can not accept a crash.
>
> Realistically, all programs must be able to live with a crash.
> At least on my system, dereferencing an out of bounds pointer
> will cause a crash, as will accessing through a mis-aligned
> pointer. And there's really no way to catch the first of these.
>

The application I was working on was multithreaded, where the threads were
quite independent from each other, meaning if one thread crashes then the
program could and should continue to work with the other threads. But the
usual behavior for threads makes this very difficult, because if one thread
crashes then the complete process crashes. In this scenario I prefer a
thrown exception which I can catch and log in a toplevel handler and then
let the thread die gracefully so that the other threads can still continue.

> In critical applications, it is an absolute rule that in the
> slightest doubt concerning the correctness of the program, it
> must crash. Stumbling on with possibly incorrect data is not
> considered an alternative.
>
> The same thing holds for any number of other applications. Most
> of the time, wrong answers are worse than no answers.
>
> There are clearly exceptions: the most obvious is a demo program
> for an exposition, where a crash is highly visible, and no one
> is going to use the results, or even verify that they are
> correct. But any time the results are to be used for anything
> significant, it is important that they be correct, and crashing
> is preferable to wrong results.
>

That's a point I still don't quite understand. If I access the vector with a
wrong index and an exception is thrown, how can it be that the program is
continuing with wrong data? This would only be possible if the developer
deliberately and irresponsibly catches the exception and tries to muddle
through with some made-up and hence wrong data. Are you talking about such a
scenario?

Kind regards Peter

Andrei Alexandrescu (See Website For Email)

Dec 9, 2005, 4:25:45 PM

Ok, it seems like we reached agreement. Let me push my luck a little bit
more: it then looks like we'd need two indexing operators:

1. Out-of-bounds operator[] stops execution in an implementation-defined
manner.

2. Out-of-bounds at() throws an exception.

3. Illegal indexing into a random iterator has undefined behavior.

The problem I have with the unchecked operator[] is that it is the most
convenient to use, and therefore the most used. The default should be
safety, because most of the time we want safety. Only rarely do we need
absolute speed, and therefore the unsafe speed should be achievable only
with extra syntax.
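
A sketch of point 1, with invented names -- the checked operator[]
stops the program instead of throwing:

#include <cstdio>
#include <cstdlib>
#include <cstddef>
#include <vector>

template <typename T>
class array
{
public:
    explicit array(std::size_t n) : v_(n) {}

    T& operator[](std::size_t i)
    {
        if (i >= v_.size()) {
            std::fprintf(stderr, "array: index %lu out of bounds\n",
                         static_cast<unsigned long>(i));
            std::abort();       // stop; no unwinding, no masking
        }
        return v_[i];
    }
    // at() would throw (point 2); iterators stay unchecked (point 3).

private:
    std::vector<T> v_;
};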


Andrei

Andrei Alexandrescu (See Website For Email)

Dec 9, 2005, 4:21:00 PM
kanze wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>>The context is: why is operator[] undefined for invalid input?
>
>
>>Answer: It is undefined so implementations can check and crash
>>the programs.
>
>
>>That answer is bogus. If the behavior is undefined, that means
>>I can't count on it. You can't count on it. We can't count on
>>it. There's only one who can count on it, and that's "Nobody".
>>:o)
>
>
> That's one way of looking at it, and I agree that requiring a
> program to crash every time there is undefined behavior would be
> a major step forward. (Given the difficulty of detecting some
> of the cases, I'm not sure that people more concerned than I am
> about performance would agree:-).)
>
> The other way is to use an implementation that you know, which
> does give the desired behavior. Said implementation would, of
> course, still be conformant.

But advising one to lock into an implementation is exactly what
experienced people know is not good. They advocate writing portable
code. You've done the same in the past. Why the sudden change of view in
this particular case?

> As a general response, of course, you're right. As an answer to
> the suggestion that the behaviors of at() and operator[]()
> should be inversed, however...

We need safety more than we need speed. The easiest solution
syntactically should be safe. Speed should be attainable via extra syntax.

>>And I still don't understand how C++, being rife with soft
>>errors that travel below the type system's radar and dynamic
>>checks alike, is safer than Java, which has no memory errors
>>and no undefined behavior. Please illuminate me.
>
>
> First, of course, the claim that Java has no undefined behavior
> is not true. It very definitely has undefined behavior,
> explicitly in the case of threading, and in practice, also
> because the specification isn't always as precise as one would
> like.

Wrong. Check Java 1.5. It defines behavior of even incorrect threaded
programs.

Undefined behavior in Java has a very different meaning. It does NOT
mean memory errors. For example, sorting with a wrongly-written
comparator has "undefined behavior" in Java because the collection may
get sorted in an arbitrary order, or the call to sort may never terminate.

> The major point, however, is that Java never gives you a choice.
> No critical application would ever use dynamic loading, for
> example, because of the problems involving versioning; Java
> requires it, and at the level of the class (or at the least, the
> package).

I suppose it's easy to check that the version is the expected one? Not
convinced that that's a major problem. Sounds like Java makes it
impossible for you to make sure you're using the right packages. In the
worst case, there are plenty of extralinguistic security features that
allow you to lock a program to a specific set of files.

> The critical applications I've worked on have made
> extensive use of programming by contract, with non-virtual
> public functions enforcing the contract before calling virtual
> private functions -- a technique forbidden in Java, where an
> "interface" can have no implementation code whatever, and
> private functions cannot be virtual.

That's entirely obscure. The same behavior can be achieved through a ton
of other techniques.

Abstract classes, dual interfaces, forwarding, package-level hiding,
inner classes,... come to mind.

> On large projects, the
> interface (in the classical sense of the word, not the
> specialized Java sense -- the .hh files in the C++
> implementation) has been under the control of a central
> architecture committee, and was only editable with explicit
> authorisation. Java requires that the code and the interface be
> in the same file, so anytime you can edit one, you can modify
> the other. (Admittedly, this last point is more concerned with
> project managability than with safety, and really only concerns
> large projects.)

You can organize the files such that the interfaces and abstract classes
correspond to your interface concept, and isolate them.

If worst comes to worst, you can do a little file manipulation with
external tools. You have to do lots of that in a large project anyway.

> And I don't think I have to explain to you the
> advantages of destructors over finally blocks.

I do think you'd have to explain to me how that is a make-or-break for
*critical projects*. *Safety*. This discussion is not "Java vs. C++".
It's "Java is not usable for critical-mission projects". I do like
destructors, they are a nifty feature, but I fail to see how they make
or break safety.

Besides, C++ has no way of implementing "finally", and I found myself
enjoying executing code upon scope exit within the context of that
scope. That feature is not attainable in C++ in any reasonable way.

> And of course, in Java, even critical failures, like a
> VirtualMachineError, are exceptions, so you are forced to
> continue executing (at least the finally blocks), even when you
> know that the underlying machine is not working correctly.

That's definitely worse than obliterating some arbitrary memory location
and continuing execution *thinking all is ok*... or is it? :o)

> Obviously, that doesn't mean that everything in Java is bad. I
> regularly use garbage collection in C++, just as I regularly use
> checking implementations of the STL, but as you correctly point
> out, neither of these are requirements of the standard. And of
> course, avoiding undefined behavior in C++ is a real art;
> strictly specifying the order of evaluation in an expression
> would go a long way to solving this. But globally, the problems
> are less than in Java.

That mellows your initial provocative statement quite some, but I assert
that that statement is simply untenable.

At every line of code, C++ offers you a vast array of means to destroy
the most solid design, in ways that are undetectable during compilation,
with static checking with any existing tool, or via dynamic checks. How
come destructors and virtual private functions make up for that?


Andrei

Andrei Alexandrescu (See Website For Email)

Dec 9, 2005, 4:24:03 PM
Kurt Stege wrote:
> Peter Most wrote:
>
>
>>Wouldn't it be easier to simply define that operator[] has to check the
>>index, then it would be one less undefined behavior?
>
>
> No. That alone helps nothing. Specifying that the implementation
> has to check for a condition, but not specifying what to do in
> that case, does not work.
> 
> Then it would be allowed to do the same as it does now. That doesn't
> help you in any way.

Ok, let's not nitpick. I suppose Peter meant "operator[] has to check
the index and do something sensible in case the index is invalid". It
remains to figure out what is sensible to do.

Please note that right now the code can do *anything*, including
obliterating otherwise valid data that sits next to vector's data. That
is *not* sensible. At all.

> Even more, using the general "as-if" rule, the compiler would be
> allowed to remove the test, for you can't see using the specified
> behaviour, if the test is executed or not.

Of course. That shouldn't be part of the discussion.

> As a consequence, and probably you meant it this way, you don't
> have only to specify to check the index, but you have to specify
> what to do when the test fails.

Likely so.

> As you see in this discussion, this decision is not easy.
> One user likes to get a core dump, another user likes to
> get most high speed for the release version, an implementer
> would like to start a debugger, ...
>
>
> No, it is easier just to specify undefined behaviour, as the
> current standard does...

But that's not at all the conclusion that the premise above leads
naturally to. The conclusion is "stops execution in a manner
defined by the implementation." Not "undefined behavior."


Andrei

Walter Bright

Dec 9, 2005, 4:24:51 PM

"kanze" <ka...@gabi-soft.fr> wrote in message
news:1134136164.9...@g47g2000cwa.googlegroups.com...

> There are clearly exceptions: the most obvious is a demo program
> for an exposition, where a crash is highly visible, and no one
> is going to use the results, or even verify that they are
> correct. But any time the results are to be used for anything
> significant, it is important that they be correct, and crashing
> is preferable to wrong results.

My favorite exception is a standalone DVD player. Since there's no
possibility of updating the software on it, I'd rather it soldiered on
trying to play the DVD, regardless of whether it's displaying corrupted data or
not. There's not much worse than having some friends over watching a movie,
and having the DVD player quit with "corrupted data!" leaving you to try to
skip over it with the fast forward before it hangs at the crucial spot.

It's interesting how software has embedded itself into everything. I have a
TV that needs to be "rebooted" by cycling the power every now and then.

Roland Pibinger

Dec 9, 2005, 4:19:55 PM
On 8 Dec 2005 19:01:23 -0500, David Abrahams <da...@boost-consulting.com> wrote:
>On some platforms the appropriate behavior in case of such a
>programming error is... hope that it never occurs, because we can't
>afford the cost of checking.
[...]

>In this case it means, "on some implementations it may be important
>not to pay for the check, and on some others we don't want to restrict
>the implementation's choice of behavior in case a check fails"

The problem is half-encapsulated classes (functions, templates) that
half-protect the user against misuse. Although that may be an
acceptable compromise for low-level code like STL it should be
considered bad style in general.

Best regards,
Roland Pibinger

Branimir Maksimovic

Dec 9, 2005, 4:21:44 PM
Andrei Alexandrescu (See Website For Email) wrote:
> Branimir Maksimovic wrote:
>
>>Peter Most wrote:
>>
>>
>>>Wouldn't it be easier to simply define that operator[] has to check the
>>>index, then it would be one less undefined behavior?
>>>
>>
>>
>>The main reason I'm not using 'at' is that it throws an exception.
>>What do you do with that exception? What action do you perform?
>>try-ing around every vector 'at' is just silly.
>
>
> But the idea wasn't to try around 'at'. There are tons of cases in which
> you can centralize error handling and return the system to a stable state.
>
> Simple example: I load some data in a vector<Data> and an index into the
> data into another vector<unsigned>. If the data is corrupt (an event of
> low likelihood but possible nonetheless), at() will throw an exception,
> I can report (in a centralized manner) that the data is corrupt, clear
> the index, and rebuild or whatnot.

Yes, this is a common idiom when implementing database index files:
vector<unsigned> is the vector of records, for example, and
vector<Data> is the vector of keys that point within the records.
The example is excellent, but unfortunately bounds checking is
useless in this case. If a key is corrupted, it will still point
within the data record file most of the time, so a checksum on the
key pointer is the real answer here. If a key is loaded and its
checksum fails, then an exception is thrown, which results in
rebuilding the index file. A bounds check would only be OK if all
invalid keys pointed out of bounds, which is not the case.

>
> I disagree that the only thing to do in the case of an out-of-bound
> access is to crash immediately. I think it's a natural outcome of our
> history: we C++ programmers are so used to unchecked array access, we
> have in mind that out-of-bounds access indicates a deep application
> error - and we develop programs like that.

I completely agree. It's just that I've never had a case where an
out-of-bounds error was anything but a fatal error.

>
> However, if we know that bounds checking is performed reliably and
> reproducibly, we can develop programming styles that allow better
> centralization of error handling.

I am not worried about out-of-bounds errors. It's the ones that
are not out of bounds, but point to incorrect data, that worry
me.

Greetings, Bane.
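
A minimal sketch of the centralized-handling idiom described above; the
Data type and the rebuild step are illustrative assumptions, not code
from the thread:

#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

struct Data { int payload; };

// at() turns a corrupt index entry into an exception that one central
// place can handle, instead of a silent out-of-bounds read.
void process(const std::vector<Data>& data, std::vector<unsigned>& index)
{
    try {
        for (std::size_t i = 0; i != index.size(); ++i)
            std::cout << data.at(index[i]).payload << '\n';
    }
    catch (const std::out_of_range&) {
        std::cerr << "index corrupt\n";
        index.clear();
        // ... rebuild the index from the data here ...
    }
}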

Andrei Alexandrescu (See Website For Email)

unread,
Dec 9, 2005, 4:26:36 PM12/9/05
to
Dietmar Kuehl wrote:
> I'm pretty capable of making sure that I have no out of bounds
> accesses when using arrays (of whatever form, be it built-in arrays,
> 'std::vector<T>', 'std::deque<T>', etc.). I don't want any redundant
> checks for whether I did the right thing. I don't mind them if they
> come with no extra cost but it isn't my experience that they do.

I think the question is, would you be willing to write an extra
".unchecked_at()" to avoid that extra cost?

> Even if there are checks, I guess we would choose different
> implementations for how they deal with programming errors: I want the
> implementation to "crash" (i.e. write an inspectable post-mortem image
> and abort); your preference seems to be that an exception is thrown
> such that you can try to continue (which is, IMO, a futile attempt
> anyway: if the programmer is incapable of correctly implementing the
> successful case, how will he be capable of recovering from his own
> errors at all?).

That's an entirely valid point, and it leads to a very interesting
discussion: modular exceptions.

How about defining exceptions such that they cannot be handled but in a
restricted set of modules? They should pass right through the other
modules without any chance of being caught, and make it to a few
carefully designed modules that handle them. Such a design can be
partially attained by defining exception classes in unnamed namespaces.
Still, catch (...) will eat them :o(.


Andrei
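
A minimal sketch of the unnamed-namespace trick just mentioned
(module_error and the function names are invented for illustration):

#include <iostream>

namespace {
    // Only this translation unit can name module_error, so only code
    // here can write a handler for it by type; catch (...) elsewhere
    // can still swallow it, as noted above.
    struct module_error {};
}

void helper() { throw module_error(); }

void entry_point()
{
    try {
        helper();
    }
    catch (const module_error&) {
        std::cerr << "handled within the owning module\n";
    }
}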

Andrei Alexandrescu (See Website For Email)

unread,
Dec 9, 2005, 4:26:58 PM12/9/05
to
Peter Most wrote:
> Kurt Stege wrote:
>
>
>>Peter Most wrote:
>>
>>
>>>Wouldn't it be easier to simply define that operator[] has to check the
>>>index, then it would be one less undefined behavior?
>>
>>No. This far it helps nothing. Specifying that the implementation
>>has to check for a condition but not specifying what to do in
>>that case does not work.
>>
>
> I meant to reverse the semantics i.e.:
> 1) operator[] checks the index and throws an out_of_range exception (as at()
> is currently doing).
> 2) at() isn't checking anything and maybe crashes (as operator[] is
> currently doing).

I think that's a more sensible choice than the status quo. I also think
the suggestion will not make it into the standard :o).

Andrei
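
A minimal sketch of what the reversed semantics could look like as a
library wrapper (checked_vector is a hypothetical name, not an existing
or proposed API):

#include <cstddef>
#include <stdexcept>
#include <vector>

template <typename T>
class checked_vector {
public:
    // Checked by default: the convenient syntax is the safe one.
    T& operator[](std::size_t i) {
        if (i >= v_.size())
            throw std::out_of_range("checked_vector::operator[]");
        return v_[i];
    }
    // Unchecked fast path, opted into with extra syntax.
    T& at(std::size_t i) { return v_[i]; }
    void push_back(const T& x) { v_.push_back(x); }
    std::size_t size() const { return v_.size(); }
private:
    std::vector<T> v_;
};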

Andrei Alexandrescu (See Website For Email)

unread,
Dec 9, 2005, 4:23:41 PM12/9/05
to
kanze wrote:
> Realistically, all programs must be able to live with a crash.
> At least on my system, dereferencing an out of bounds pointer
> will cause a crash, as will accessing through a mis-aligned
> pointer. And there's really no way to catch the first of these.

There is no way to guarantee that an illegal pointer access will cause
a crash.

What if the pointer points to some memory that is within the right
address space, but of the wrong type?

void foo() {
    float a;
    int b[3];
    float c;
    int * p = b + 3;
    // *p will likely obliterate either a or c
}

That won't cause a crash. It will, however, cause soft errors.

> In critical applications, it is an absolute rule that in the
> slightest doubt concerning the correction of the program, it
> must crash. Stumbling on with possibly incorrect data is not
> considered an alternative.

Sure. That leads to the idea that operator[] must be checked and
execution must be aborted if the check fails, not that it yields
undefined behavior.

I think Peter has made an innocent semantic mistake that other posters
pick on. The way I read what he's saying, when he says "checking", he
really means "checking and taking contingency measures if the check
fails". Many people seem to imply that he requires throwing an exception
as the only acceptable contingency measure.

Speaking for me at least, I'm interested in making behavior defined for
the "usual" case and allowing unsafe performance with extra syntactic
effort.

It's a pity that the discussion took so many meanders just to get to its
point. Instead of discussing whether behavior should be defined or
not, we discuss whether throwing or aborting is the best thing to do.
Somehow it is being aired in this thread that "checking" means
"throwing", and that "undefined behavior" naturally leads to "aborting".
That is simply false.

Checking is good. Defined behavior is good. Unchecked, undefined, are
bad. Slow is also bad, and that's the tension we are fighting against.


Andrei

Branimir Maksimovic

unread,
Dec 9, 2005, 9:05:23 PM12/9/05
to
Peter Most wrote:
> kanze wrote:
>
>
>>>I think only small, short running programs can "live" with a
>>>crash. But all other programs simply can not accept a crash.
>>
>>Realistically, all programs must be able to live with a crash.
>>At least on my system, dereferencing an out of bounds pointer
>>will cause a crash, as will accessing through a mis-aligned
>>pointer. And there's really no way to catch the first of these.
>>
>
> The application I was working on was multithreaded, where the threads were
> quite independent from each other, meaning if one thread crashes then the
> program could and should continue to work with the other threads. But the
> usual behavior for threads makes this very difficult, because if one thread
> crashes then the complete process is crashing. In this scenario I prefer a
> thrown exception which I can catch and log in a toplevel handler and then
> let the thread die gracefully so that the other threads can still continue.
>


I would do it this way: when a thread finds a fatal error, e.g. out of
bounds, it forks, then aborts in the child to produce a core dump for
debugging purposes, while in the parent it throws an exception which is
handled in top-level code, which then just picks another task and
continues to work. There would also be a counter of fatal errors for
each module; when a sufficient number is reached, threads would avoid
executing that module and start sending mail and SMS to the module's
author.

Greetings, Bane.
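
A sketch of the fork-then-abort idea on POSIX (report_fatal_error is a
made-up helper; real code would also handle fork() failing and attach
the module/task bookkeeping):

#include <cstdlib>
#include <stdexcept>
#include <unistd.h>

void report_fatal_error(const char* what)
{
    if (fork() == 0)
        std::abort();            // child: dies and leaves a core dump
    // parent: hand the error to a top-level handler, which logs it
    // and picks another task
    throw std::runtime_error(what);
}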

Thorsten Ottosen

unread,
Dec 9, 2005, 9:06:18 PM12/9/05
to
Peter Most wrote:
> kanze wrote:

>>There are clearly exceptions: the most obvious is a demo program
>>for an exposition, where a crash is highly visible, and no one
>>is going to use the results, or even verify that they are
>>correct. But any time the results are to be used for anything
>>significant, it is important that they be correct, and crashing
>>is preferable to wrong results.
>>
>
> That's a point I still don't quite understand. If I access the vector with a
> wrong index and an exception is thrown, how can it be that the program is
> continuing with wrong data?

Why did the program use a wrong index in the first place? Something in
the calling code must be wrong ... can you prove which part it is?

-Thorsten

Andrei Alexandrescu (See Website For Email)

unread,
Dec 9, 2005, 9:08:43 PM12/9/05
to
Branimir Maksimovic wrote:
> I am not worried about out-of-bounds errors. It's the ones that
> are not out of bounds, but point to incorrect data, that worry
> me.

I think the errors that we should be worried about are nonlocal errors
caused by obliterating data of a different type sitting innocently next
to data manipulated by buggy code.

Memory protection has solved that at the OS level. We should strive for
solving that at the language level, too.


Andrei

Walter Bright

unread,
Dec 9, 2005, 9:09:35 PM12/9/05
to

"Andrei Alexandrescu (See Website For Email)"
<SeeWebsit...@moderncppdesign.com> wrote in message
news:Ir8vF...@beaver.cs.washington.edu...

> The problem I have with the unchecked operator[] is that it is the most
> convenient to use, and therefore the most used. The default should be
> safety because most of the time we want safety. Only rarely we need
> absolute speed, and therefore the unsafe speed should be achievable only
> with extra syntax.

I agree, this is the sensible approach. Too often in C++, the
straightforward intuitive way to do something is the least safe and most
deprecated way. (Virtualness of destructors comes to mind.)

Walter Bright
www.digitalmars.com C, C++, D programming language compilers

Walter Bright

unread,
Dec 9, 2005, 9:10:32 PM12/9/05
to

"Branimir Maksimovic" <bm...@hotmail.com> wrote in message
news:dnck3n$n0c$1...@domitilla.aioe.org...

> I completely agree. It's just that I've never had a case where an
> out-of-bounds error was anything but a fatal error.

Consider this:

try
{
    for (i = 0; 1; i++)
        array[i] = ...;
}
catch (ArrayBoundsException)
{
}

If array is a large array, that version of the loop will be significantly
faster than:

for (i = 0; i < array.length; i++)
    array[i] = ...;

because in the latter the array bounds check gets done twice per iteration,
rather than once. The redundant check can sometimes be eliminated by a more
advanced data flow analysis optimizer, but not always.

Walter Bright
www.digitalmars.com C, C++, D programming language compilers


Andrei Alexandrescu (See Website For Email)

unread,
Dec 9, 2005, 9:12:16 PM12/9/05
to
Walter Bright wrote:
> "kanze" <ka...@gabi-soft.fr> wrote in message
> news:1134136164.9...@g47g2000cwa.googlegroups.com...
>
>>There are clearly exceptions: the most obvious is a demo program
>>for an exposition, where a crash is highly visible, and no one
>>is going to use the results, or even verify that they are
>>correct. But any time the results are to be used for anything
>>significant, it is important that they be correct, and crashing
>>is preferable to wrong results.
>
>
> My favorite exception is a standalone DVD player. Since there's no
> possibility of updating the software on it, I'd rather it soldiered on
> trying to play the DVD, regardless of whether it's displaying corrupted data
> or not. There's not much worse than having some friends over watching a movie,
> and having the DVD player quit with "corrupted data!" leaving you to try to
> skip over it with the fast forward before it hangs at the crucial spot.

I'd also prefer a cell phone conversation with hiccups to one that's
interrupted because the packet processor has a bug.

Andrei

Alf P. Steinbach

unread,
Dec 9, 2005, 9:18:11 PM12/9/05
to
* Andrei Alexandrescu (See Website For Email):

>
> Besides, C++ has no way of implementing "finally", and I found myself
> enjoying to execute code upon scope exit within the context of that
> scope. That feature is not attainable with C++ in any reasonable way.

OK, heated discussion, so the first sentence, "no way", is presumably an
immediate that's-how-I-feel-about-it, and the second sentence, "[no]
reasonable way", a more intellectual correction.

I think that statement was true just a few years ago, because of lack of
compiler support. Translated from the [no.it.programmering.c++] FAQ,
<url: http://utvikling.com/cppfaq/04/04/02/index.html>, (Norwegian):

"Even if this is no permanent solution, absolutely not recommended,
and not supported by all C++ compilers, here's a program that shows
how to simulate finally:"

(Interestingly you're directly mentioned in that FAQ item! Written
mostly by me! And now I see that all should have been timestamped! :-))

Given that compilers now generally do support this technique, as I see
it the remaining unreasonability must rest with the awkwardness of the
construction, the slight inefficiency, and/or the fragility of relying
on convention: could you elaborate on what you find so unreasonable that
in the heat of the moment you write "no way"?

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

gerg

unread,
Dec 9, 2005, 9:17:37 PM12/9/05
to
I believe that a C++ which used the exception behaviour of Java would
be great for the language. In the example mentioned above, the ignorant
programmer would not be able to compile his code using the [] operator
unless he caught an "invalid access" exception.

Bo Persson

unread,
Dec 9, 2005, 9:16:04 PM12/9/05
to

"Peter Most" <Peter...@gmx.de> skrev i meddelandet
news:4399b714$0$27899$9b4e...@newsread4.arcor-online.net...

> kanze wrote:
>
>> Part of the problem is that C++ is designed to be statically
>> compiled. It should be possible to define a portable "byte
>> code" with an interpreter for it, but to my knowledge, no one
>> has done so. It would be very difficult for a browser to run
>> code compled for Windows on a PC on my Sparc under Solaris. THe
>> current state of affairs is that to run C++ directly, my browser
>> would have to download the sources, and compile and link them.
>>
> I'm not so familiar with the newer microsoft compilers, but isn't
> the
> managed C++ compiler from Microsoft targeting the CLR?

But that is not ISO C++, but Managed C++ or ECMA C++/CLI. These are
all totally different languages with major syntactic and semantic
differences, just like Java and C#.

> See also:
> http://en.wikipedia.org/wiki/Managed_Extensions_for_C_Plus_Plus

> And if Mono progresses as planned, then it would also run under
> Linux.

Theoretically, yes. Practically, maybe not.


Bo Persson

Thorsten Ottosen

unread,
Dec 9, 2005, 10:03:06 PM12/9/05
to
Andrei Alexandrescu (See Website For Email) wrote:
> kanze wrote:
>

>>The critical applications I've worked on have made
>>extensive use of programming by contract, with non-virtual
>>public functions enforcing the contract before calling virtual
>>private functions -- a technique forbidden in Java, where an
>>"interface" can have no implementation code whatever, and
>>private functions cannot be virtual.
>
>
> That's entirely obscure. The same behavior can be achieved through a ton
> of other techniques.
>
> Abstract classes, dual interfaces, forwarding, package-level hiding,
> inner classes,... come to mind.

What are dual interfaces?

Anyway, all of those things seem like poor hacks. Abstract classes
usually won't work because you don't have multiple inheritance.

> I do think you'd have to explain to me how that is a make-or-break for
> *critical projects*. *Safety*. This discussion is not "Java vs. C++".
> It's "Java is not usable for critical-mission projects". I do like
> destructors, they are a nifty feature, but I fail to see how they make
> or break safety.

When you only have to implement the "finally" once, in the destructor,
it is much less likely that you'll miss it. Repetition is evil, and
finally clauses readily invite it.

> Besides, C++ has no way of implementing "finally", and I found myself
> enjoying to execute code upon scope exit within the context of that
> scope. That feature is not attainable with C++ in any reasonable way.

try
{
    ...
}
catch( ... )
{
    finally();
    throw;
}

?

-Thorsten

Andrei Alexandrescu (See Website For Email)

unread,
Dec 10, 2005, 6:29:04 AM12/10/05
to
David Abrahams wrote:
> "Andrei Alexandrescu (See Website For Email)" <SeeWebsit...@moderncppdesign.com> writes:
>>As far as I've known for the longest time, "undefined" often stands for
>>"for the sake of generating the fastest code possible when the program
>>has no error."

>
>
> In this case it means, "on some implementations it may be important
> not to pay for the check, and on some others we don't want to restrict
> the implementation's choice of behavior in case a check fails"

...which is no different from what I said. I thought "Me too" posts are
not recommended by moderators :o).

{Paraphrasing can add clarity which makes it not a 'me too'. Of course
the moderators should have rejected this post as well as 'adding nothing
new' :-) but this moderator thought that a little clarification might be
of general interest. -mod/fwg}


Andrei

Francis Glassborow

unread,
Dec 10, 2005, 9:09:01 AM12/10/05
to
In article <Ir8y4...@beaver.cs.washington.edu>, "Andrei Alexandrescu
(See Website For Email)" <SeeWebsit...@moderncppdesign.com> writes
>> I meant to reverse the semantics i.e.:
>> 1) operator[] checks the index and throws an out_of_range exception (as at()
>> is currently doing).
>> 2) at() isn't checking anything and maybe crashes (as operator[] is
>> currently doing).
>
>I think that's a more sensible choice than the status quo. I also think
>the suggestion will not make it into the standard :o).

You can absolutely bet on that. Unfortunately, as one of the functions is
an operator, we do not even have the option of providing an extra
parameter that defaults to the current status. In simple terms, we will
have to improve our education of C++ programmers by warning them that
STL implementations of operator[] have the same problem that operator[]
has for raw arrays.

Now that last sentence may explain why we have the current requirements:
in general, user-defined operators should behave like built-in ones.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Felipe Magno de Almeida

unread,
Dec 10, 2005, 8:09:40 AM12/10/05
to
Non-virtual destructors, IMO, have a strong meaning, which helps to
self-document code. When one sees a class with a virtual destructor,
one knows it is meant for polymorphism or the like; but when one sees
a class without any virtual functions, one must really think about
whether inheriting from it is a good idea. IMHO, it really helps
designing software in C++.

best regards,

Seungbeom Kim

unread,
Dec 10, 2005, 8:02:07 AM12/10/05
to
Walter Bright wrote:
> "Andrei Alexandrescu (See Website For Email)"
> <SeeWebsit...@moderncppdesign.com> wrote in message
> news:Ir8vF...@beaver.cs.washington.edu...
>
>>The problem I have with the unchecked operator[] is that it is the most
>>convenient to use, and therefore the most used. The default should be
>>safety because most of the time we want safety. Only rarely we need
>>absolute speed, and therefore the unsafe speed should be achievable only
>>with extra syntax.
>
> I agree, this is the sensible approach. Too often in C++, the
> straightforward intuitive way to do something is the least safe and most
> deprecated way. (Virtualness of destructors comes to mind.)

I don't disagree, but let me put it this way: it is a conflict between
having orthogonal features vs having them in the most commonly used way.

No member function is virtual unless explicitly declared so or inherited
from a virtual one, and it's just that a destructor is no exception. A
function's being virtual and its being a destructor are orthogonal and
do not affect each other. It makes the language neater and easier to
explain and understand, as there's one less (special) rule to remember.
On the other hand, in practice we often need the destructor to be
virtual if any member function is virtual; hence the "guideline" such as
"Declare your destructor virtual if any member function is virtual." -
one more to remember.

This may not be the best example, but a const variable's ability to
participate in an integral constant-expression (ICE) is where C++ takes
the other direction: a non-const variable cannot be used as an ICE while
a const variable can. This is a special rule, and a variable's constness
and its ability to participate in an ICE are not orthogonal. It helps
you avoid the dangerous preprocessor macros for constants, but at the
same time adding or removing a const may affect the program in a rather
obscure way and can create tricky corner cases.

I feel that many advantages and disadvantages of C++, as it is, come
from trying to be orthogonal wherever possible. But, which one is
better, is an open question, I think. :)

--
Seungbeom Kim
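
A small illustration of the non-orthogonal const/ICE rule described
above:

const int N = 10;    // const variable: usable as an ICE
int a[N];            // OK: N is a constant-expression

int m = 10;          // non-const variable
// int b[m];         // ill-formed: m is not a constant-expression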

Francis Glassborow

unread,
Dec 10, 2005, 9:09:22 AM12/10/05
to
In article <3vufgoF...@individual.net>, Bo Persson <b...@gmb.dk>
writes

>But that is not ISO C++, but Managed C++ or ECMA C++/CLI. These are
>all totally different languages with major syntactic and semantic
>differences, just like Java and C#.

Which is one reason that the BSI (UK National Body) C++ Panel is deeply
unhappy with the name being applied to this new language. C++ is the
name of an ISO Standard language that is maintained by SC22/WG21. We
should not tolerate other Standards bodies (or worse still, ISO itself
via the fast track process) confusing people with 'C++/CLI'. The owners
of the Java marque went bananas (and, IIRC, took legal action) over
those that wanted to use Java as the name for some extended version of
the language.

It has been difficult enough over the years to educate people that
'Visual C++' is just an implementation of C++ (albeit one with many
extensions) and not a distinct language. What hope have we of now
reversing the process and teaching people that 'Managed C++' or
'C++/CLI' is not C++?

Because C++ was very nearly a superset of C, we got away with calling it
C++ rather than, for example, D; but the differences between C++/CLI and
C++ are rather more extensive, and the name is not indicative of that.


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Peter Most

unread,
Dec 10, 2005, 9:11:26 AM12/10/05
to
Bo Persson wrote:

>> I'm not so familiar with the newer microsoft compilers, but isn't
>> the
>> managed C++ compiler from Microsoft targeting the CLR?
>
> But that is not ISO C++, but Managed C++ or ECMA C++/CLI. These are
> all totally different languages with major syntactic and semantic
> differences, just like Java and C#.
>
>> See also:
>> http://en.wikipedia.org/wiki/Managed_Extensions_for_C_Plus_Plus
>

As I said above, I'm not very familiar with it, but from what I've read
it's pretty much normal C++ with some minor changes. But it doesn't fully
support ISO C++, nor is it certified by ISO, so in this regard you are
right.
Nonetheless it is a C++ compiler which targets a kind of virtual machine,
and that's what I wanted to show.


>
>
>> And if Mono progresses as planned, then it would also run under
>> Linux.
>
> Theoretically, yes. Practically, maybe not.
>

Well, I've played with C#/Mono a little bit and I think they are not too
far away from fully supporting the CLR. But this is of course a different
discussion and probably a little off topic. Still, it's worth observing
how C++ could evolve to reach new platforms and survive against allegedly
more modern languages like Java/C#.
Wouldn't it be very interesting if there were a C++ compiler which targets
the JVM, like Jython? This would open up a new world of possibilities,
like using Swing from C++ and hence a standard GUI.
Does such a compiler exist? I couldn't find one, so if anybody knows more
please post!

Kind regards Peter

Andrei Alexandrescu (See Website For Email)

unread,
Dec 10, 2005, 8:06:14 AM12/10/05
to
Alf P. Steinbach wrote:
> * Andrei Alexandrescu (See Website For Email):
>
>>Besides, C++ has no way of implementing "finally", and I found myself
>>enjoying to execute code upon scope exit within the context of that
>>scope. That feature is not attainable with C++ in any reasonable way.
>
>
> OK, heated discussion, so the first sentence, "no way", is presumably an
> immediate that's-how-I-feel-about-it, and the second sentence, "[no]
> reasonable way", a more intellectual correction.
>
> I think that statement was true just a few years ago, because of lack of
> compiler support. Translated from the [no.it.programmering.c++] FAQ,
> <url: http://utvikling.com/cppfaq/04/04/02/index.html>, (Norwegian):
>
> "Even if this is no permanent solution, absolutely not recommended,
> and not supported by all C++ compilers, here's a program that shows
> how to simulate finally:"
>
> (Interestingly you're directly mentioned in that FAQ item! Written
> mostly by me! And now I see that all should have been timestamped! :-))

I was of course well aware of techniques such as the one mentioned. It's
something that I've been mulling over for the longest time, and
ScopeGuard is a pale incarnation of it.

> Given that compilers now generally do support this technique, as I see
> it the remaining unreasonability must rest with the awkwardness of the
> construction, the slight inefficiency, and/or the fragility of relying
> on convention: could you elaborate on what you find so unreasonable that
> in the heat of the moment you write "no way"?

I said "no way" because of the reasons you mentioned, to which I'd add
the further awkwardness in the presence of loops (break and continue will
not properly execute the simulated finally block unless you add more
cruft to the code).

IMHO the programming landscape would be vastly better off if the
language allowed us to plant code to be executed upon some function's
exit. After many years of intense meditation and sitting in lotus, I
strongly believe that much of destructors' usefulness has a lot to do
with exactly that - they are a hook on a scope termination.

Perl has an interesting module (see
http://search.cpan.org/~abergman/Hook-Scope-0.04/Scope.pm) that allows
you to insert code to be executed upon a scope exit, *within the context
of the caller*. The sheer fact that Perl allows one to implement such a
module raised my opinion of Perl greatly.

Hook::Scope's approach is superior to Java's finally because it doesn't
require a block preceding it and can be planted straight within the
current scope to be executed later. That makes it way more powerful and
vastly simplifies idiomatic usage. For me it was quite a change of view
to realize that, contrary to their name, "finally" blocks don't
belong to the end of the normal course of action; they must be planted
piecemeal and as soon as the need to finalize occurs in the control
flow. That's why Java's finally (which requires a block preceding it) is
fatally flawed.

I believe three constructs would simplify writing robust code:

1. on_scope_exit { ... code .. }

Executes code upon the current scope's exit. Several on_scope_exit
blocks in a scope are executed LIFO.

2. on_scope_success { ... code ... }

Executes code if the current scope is exited normally (no exception
engendered).

3. on_scope_failure { ... code ... }

Executes code if the current scope is exited via an exception.

With such language constructs, writing robust programs would be much
easier. Programmers can easily change the state of the program and leave
safeguards behind them to ensure things are properly cleaned up in case
something fails down the road. Little RAII classes built just for the
sake of undoing stuff (I've seen "IntDecrementer" in my time...) would
not be necessary anymore. Such a construct is so useful, it makes
ScopeGuard very popular in spite of its awkwardness.


Andrei
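
A bare-bones C++ approximation of on_scope_exit in the ScopeGuard
spirit, to make the comparison concrete (a sketch only: it takes a
plain function pointer and omits the dismissal and argument-binding
machinery a real ScopeGuard needs):

class OnScopeExit {
public:
    typedef void (*Action)();
    explicit OnScopeExit(Action a) : action_(a) {}
    ~OnScopeExit() { action_(); }   // runs on every exit path
private:
    Action action_;
    OnScopeExit(const OnScopeExit&);             // non-copyable
    OnScopeExit& operator=(const OnScopeExit&);
};

void release_lock();   // assumed cleanup function

void f()
{
    OnScopeExit guard(&release_lock);  // planted where the need arises
    // ... code that may return or throw; release_lock() still runs ...
}

Note how the guard, unlike a finally block, is written right where the
state change happens rather than at the end of the scope.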

Bob Bell

unread,
Dec 10, 2005, 8:00:46 AM12/10/05
to
Walter Bright wrote:
> "Branimir Maksimovic" <bm...@hotmail.com> wrote in message
> news:dnck3n$n0c$1...@domitilla.aioe.org...
> > I completely agree. It's just that I've never had a case where an
> > out-of-bounds error was anything but a fatal error.
>
> Consider this:
>
> try
> {
> for (i = 0; 1; i++)
> array[i] = ...;
> }
> catch (ArrayBoundsException)
> {
> }
>
> If array is a large array, that version of the loop will be significantly
> faster than:
>
> for (i = 0; i < array.length; i++)
> array[i] = ...;
>
> because in the latter the array bounds check gets done twice per iteration,
> rather than once. The redundant check can sometimes be eliminated by a more
> advanced data flow analysis optimizer, but not always.

I doubt it would be significantly faster unless "..." is trivial. This
seems more like an abuse of exception handling than an argument in
favor of checked array indices. In any case, I would think the "safer
is more important than faster" criterion would prefer the second
alternative over the first. It may be slower, but it is easier to read
and understand as correct. The first one requires more thinking, and
all it takes is a spurious out-of-bounds array index from the "..."
(which could be caused by a bug) to ruin the correctness.

I also don't recall ever having an out of bounds index that wasn't
caused by a bug.

Bob

Rob

unread,
Dec 10, 2005, 8:00:12 AM12/10/05
to
gerg wrote:

> i beleive that a C++ which used the exception behaviour of Java would
> be great for the language. I the example mentioned above the ignorant
> programmer would not be able to compile his code using the [] operator
> unless he caught a "invalid access" exception.
>

I disagree. Such a feature of the compiler would add unnecessary
penalties for disciplined programmers while rewarding undisciplined
programming techniques (e.g. encouraging people not to worry about
whether an array index is valid). A programmer who *knows* that, by
design, his code will not cause an invalid access would be forced to
catch invalid-access exceptions for no reason.

Andrei Alexandrescu (See Website For Email)

unread,
Dec 10, 2005, 8:08:12 AM12/10/05
to
Thorsten Ottosen wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>>kanze wrote:
>
>>>The critical applications I've worked on have made
>>>extensive use of programming by contract, with non-virtual
>>>public functions enforcing the contract before calling virtual
>>>private functions -- a technique forbidden in Java, where an
>>>"interface" can have no implementation code whatever, and
>>>private functions cannot be virtual.
>>
>>
>>That's entirely obscure. The same behavior can be achieved through a ton
>>of other techniques.
>>
>>Abstract classes, dual interfaces, forwarding, package-level hiding,
>>inner classes,... come to mind.
>
> what is dual interfaces?
>
> Anyway, all of those things seems like poor hacks. Abstract classes
> usually won't work because you don't have multiple inheritance.

Dual interfaces means there's one interface for client code and one
private interface for implementation code.

So... shall we now say that C++ is safe because it allows multiple
inheritance?!?

> when you only have to implement the "finally" once in the destructor,
> it makes it much less likely that you'll miss it. Repetition is evil
> and finally clauses readily support that.

Destructors don't execute within the context of the current scope.
That's sometimes eminently useful. I want finally (actually
on_scope_exit, on_scope_success, and on_scope_failure) *in addition to*
destructors. They don't compete; they serve different purposes.

>>Besides, C++ has no way of implementing "finally", and I found myself
>>enjoying to execute code upon scope exit within the context of that
>>scope. That feature is not attainable with C++ in any reasonable way.
>
> try
> {
> ...
> }
> catch( ... )
> {
> finally();
> throw;
> }
>
> ?

:o}

Try the following inside the try block:

0. fall through the end of the block
1. return
2. break
3. continue
4. goto (ouch!)

None of these fundamental control-flow mechanisms will execute finally().

A few months ago I had embarked on a macro system for C++ that would
entirely replace the built-in macro system and would be powerful enough
to handle things like typesafe variadic functions, automatic type,
function, and value reification (a la templates, just way cleaner and
more general :o)), user-defined constants, "smart" enums, all sorts of
code transformations, the works. I was enthused! After a few iterations
through the design, I gave up in frustration because I found no simple,
clean AST transformation that would allow implementing on_scope_exit,
on_scope_success, and on_scope_failure.


Andrei

Francis Glassborow

unread,
Dec 10, 2005, 9:08:40 AM12/10/05
to
In article <1134136164.9...@g47g2000cwa.googlegroups.com>,
kanze <ka...@gabi-soft.fr> writes

>In critical applications, it is an absolute rule that in the
>slightest doubt concerning the correction of the program, it
>must crash. Stumbling on with possibly incorrect data is not
>considered an alternative.

How critical is an operating system? Surely such a program should never
crash regardless of how it is abused; it needs to recover and continue.
Ideally it should discontinue the process that caused the problem whilst
continuing with everything else. Operating systems that fail to meet
this criterion are not normally considered to be satisfactory (though
most of us have had to endure using such systems.)


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Dietmar Kuehl

unread,
Dec 10, 2005, 7:59:17 AM12/10/05
to
Andrei Alexandrescu (See Website For Email) wrote:
> Dietmar Kuehl wrote:
>> I'm pretty capable of making sure that I have no out of bounds
>> accesses when using arrays (of whatever form, be it built-in arrays,
>> 'std::vector<T>', 'std::deque<T>', etc.). I don't want any redundant
>> checks for whether I did the right thing. I don't mind them if they
>> come with no extra cost but it isn't my experience that they do.
>
> I think the question is, would you be willing to write an extra
> ".unchecked_at()" to avoid that extra cost?

Absolutely not! But I would be willing to create my own container
replacement which uses 'operator[]()' for an unchecked access if
mandatory range checking in standard containers would cause a
performance problem (big deal: I have them floating around anyway;
well, I haven't implemented deque). The use of the operator is very
important for at least two reasons:

- I want my code to be easy to read. I tend to create expressions which
  take up the horizontal capacity of my screen using short names and
  I think that expressions broken over multiple lines are just
  incomprehensible. Long names would make the situation worse.
  ... and the reason I have long lines already is normally not that
  I'm doing too much stuff on one line but that there is already too
  much syntactic clutter which cannot reasonably be avoided (e.g.
'typename' keywords).
- Much of my actual algorithmic code is generic code and non-operator
functions are e.g. not applicable to built-in arrays. Thus, it is a
non-starter to try using different names.

On the other hand, I would also provide checked containers to
relative novices, also using 'operator[]()' but a range checked one.
IMO, creating a custom container isn't a big deal and if I have
specific requirements that is exactly what I just do. I would still
appreciate the freedom for the library implementer to choose which
[default] policy suits the targeted user base best.
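
Dietmar's second point is worth a small illustration: generic code that
indexes with operator[] works unchanged for built-in arrays and for
std::vector, whereas a named member like at() could not (the sum
function is invented for the example):

#include <cstddef>
#include <vector>

template <typename Array>
long sum(const Array& a, std::size_t n)
{
    long s = 0;
    for (std::size_t i = 0; i != n; ++i)
        s += a[i];      // works for T[] and std::vector<T> alike
    return s;
}

int raw[3] = { 1, 2, 3 };
std::vector<int> vec(raw, raw + 3);
long s1 = sum(raw, 3);
long s2 = sum(vec, vec.size());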

>> Even if there are checks, I guess we would choose different
>> implementations for how they deal with programming errors: I want the
>> implementation to "crash" (i.e. write an inspectable post-mortem image
>> and abort); your preference seems to be that an exception is thrown
>> such that you can try to continue (which is, IMO, a futile attempt
>> anyway: if the programmer is incapable of correctly implementing the
>> successful case, how will he be capable of recovering from his own
>> errors at all?).
>
> That's an entirely valid point, and it leads to a very interesting
> discussion: modular exceptions.

Do modular exceptions do stack unwinding like normal exceptions do?
Both possible answers cause situations increasing the problem: if
they don't do normal stack unwinding, they cannot restore partially
changed data structures. If they do normal stack unwinding they run
the risk of corrupting the program's state even more.

> How about defining exceptions such that they cannot be handled but in a
> restricted set of modules? They should pass right through the other
> modules without any chance of being caught, and make it to a few
> carefully designed modules that handle them.

Handle them to what end? To continue with corrupted program state?
I doubt that you can come up with carefully designed modules which
recover from yet unknown programming errors (if the programming
errors were known, it would be easier to fix them than to create a
module recovering from them).
--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.eai-systems.com> - Efficient Artificial Intelligence

Niklas Matthies

unread,
Dec 10, 2005, 8:10:00 AM12/10/05
to
On 2005-12-10 02:10, Walter Bright wrote:
> "Branimir Maksimovic" <bm...@hotmail.com> wrote:
:

>> I completely agree. It's just that I've never had a case where an
>> out-of-bounds error was anything but a fatal error.
>
> Consider this:
>
> try
> {
> for (i = 0; 1; i++)
> array[i] = ...;
> }
> catch (ArrayBoundsException)
> {
> }
>
> If array is a large array, that version of the loop will be
> significantly faster than:
>
> for (i = 0; i < array.length; i++)
> array[i] = ...;
>
> because in the latter the array bounds check gets done twice per
> iteration, rather than once. The redundant check can sometimes be
> eliminated by a more advanced data flow analysis optimizer, but not
> always.

It's fundamentally wrong to (have to) write such code. Programming
languages should cater to the programmer, not the other way round.

Also, this has the usual problem of selective exception catching:
Some other code executed within the loop (i.e. within the "..." or
the assignment operator) could also happen to access arrays (if it
doesn't now, some future program modifications could create such a
situation), and (due to some bug) also throw an ArrayBoundsException,
and you don't want to catch those here. A robust version of this code
would have to be written as:

for (i = 0;; i++)
{
    ElemType * e;
    try
    {
        e = &array[i];
    }
    catch (ArrayBoundsException)
    {
        break;
    }
    *e = ...;
}

It would be much better to be able to write

for (i = 0; i < array.length; i++)
{
    array.unchecked_access[i] = ...;
}

or even better

for (i = 0; i < array.length; i++)
{
    assume(i < array.length); // hint to the optimizer
    array[i] = ...;
}

But even this should only be very rarely necessary.

-- Niklas Matthies

James Kanze

unread,
Dec 10, 2005, 4:26:18 PM12/10/05
to
Andrei Alexandrescu (See Website For Email) wrote:
> Branimir Maksimovic wrote:

>>I am not worried about out of bounds errors. It's ones that
>>are not out of bounds, that point to incorrect data worries
>>me.

> I think the errors that we should be worried about are
> nonlocal errors caused by obliterating data of a different
> type sitting innocently next to data manipulated by buggy
> code.

> Memory protection has solved that at the OS level. We should
> strive for solving that at the language level, too.

Intel actually provided the mechanisms to do that at one time;
they might still be present in the current 32 bit architecture,
but as far as I know, no one uses them. (They do have a
negative impact on runtime, when each function call and each
memory access loads a new segment register, with its own set of
protections.)

--
James Kanze mailto: james...@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

James Kanze

unread,
Dec 10, 2005, 4:27:02 PM12/10/05
to
Dietmar Kuehl wrote:
> Peter Most wrote:

>>kanze wrote:

>>>Calling operator[] with an out of bounds index is undefined
>>>behavior precisely so that an implementation *can* check the
>>>index, and do whatever is appropriate *on* *that* *platform*
>>>for a programming error. On Unix, I expect a core dump, but
>>>from what I understand, this is not the usual case under
>>>Windows.

>>Wouldn't it be easier to simply define that operator[] has to
>>check the index, then it would be one less undefined behavior?

> I'm pretty capable of making sure that I have no out of bounds
> accesses when using arrays (of whatever form, be it built-in
> arrays, 'std::vector<T>', 'std::deque<T>', etc.). I don't want
> any redundant checks for whether I did the right thing. I
> don't mind them if they come with no extra cost but it isn't
> my experience that they do.

Well, the additional cost is only an issue if the checks are in
a tight loop. And the statistics I've seen for languages which
require such checks are that the compiler is almost always able
to hoist them out of the loop. (Of course, in such languages,
the array type is always built in, and the compiler knows
precisely why the array bounds check is there, and what it is
checking. In cases concerning library types, it would take a
lot more analysis on the part of the compiler. But I think that
it is within reach of the better optimizers today.)

And of course, unless you're a much better coder than I am, such
checks aren't necessarily redundant during the development
phase.

Bjorn Reese

unread,
Dec 10, 2005, 4:24:29 PM12/10/05
to
kanze wrote:

> In critical applications, it is an absolute rule that in the
> slightest doubt concerning the correction of the program, it
> must crash. Stumbling on with possibly incorrect data is not
> considered an alternative.

The rule is that it must enter a safe state. A crash can be a safe
state, but sometimes it is not.

--
mail1dotstofanetdotdk

James Kanze

unread,
Dec 10, 2005, 4:27:44 PM12/10/05
to
gerg wrote:

> i beleive that a C++ which used the exception behaviour of
> Java would be great for the language.

You mean you'd like for it to become impossible to write a
correct program? Exception safety can only be implemented if
there are certain operations which are guaranteed not to raise
an exception.

> In the example mentioned above, the ignorant programmer would
> not be able to compile his code using the [] operator unless
> he caught an "invalid access" exception.

But Java doesn't require this. Java seems to have adopted the
worst of both worlds, by doing some static analysis, so you pay
for it, and then throwing in a lot of exceptions which mean that
you cannot count on it -- in sum, you don't get what you've paid
for.

I would have preferred that C++ require a static analysis. But
such analysis is only useful if it includes all exceptions.

--
James Kanze mailto: james...@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


Niklas Matthies

unread,
Dec 10, 2005, 4:36:25 PM12/10/05
to
On 2005-12-10 14:11, Peter Most wrote:
:

> Wouldn't it be very interesting if there were a C++ compiler which
> targets the JVM, like Jython? This would open up a now world of
> possibilities like uses SWING from C++ and hence a standard GUI.
> Does such a compiler exist? I couldn't find one, so if anybody knows
> more please post!

There is at least one C-to-JVM compiler (AMPC, http://www.axiomsol.com/),
and there are C++-to-C compilers, so in theory you could combine them to
get a C++-to-JVM compiler. But this won't get you the mapping of class
types and exceptions you're probably thinking of.

-- Niklas Matthies

James Kanze

unread,
Dec 10, 2005, 4:34:20 PM12/10/05
to
Walter Bright wrote:
> "Andrei Alexandrescu (See Website For Email)"
> <SeeWebsit...@moderncppdesign.com> wrote in message
> news:Ir8vF...@beaver.cs.washington.edu...

>>The problem I have with the unchecked operator[] is that it is
>>the most convenient to use, and therefore the most used. The
>>default should be safety because most of the time we want
>>safety. Only rarely we need absolute speed, and therefore the
>>unsafe speed should be achievable only with extra syntax.

> I agree, this is the sensible approach. Too often in C++, the
> straightforward intuitive way to do something is the least
> safe and most deprecated way. (Virtualness of destructors
> comes to mind.)

The problem, if it is a problem, is that C++ caters to many
different types of users, and allows implementations to vary in
order to do so. Thus, there is no requirement that operator[]
bounds check and abort, because for some users (probably a very
small minority), this would represent an unacceptable runtime
overhead. On the other hand, it is certainly a legal behavior,
and one that I would expect from a quality implementation.

There are, of course, other cases where you are right -- some of
them, of course, due to issues of C compatibility, and others
just historical. On the other hand, from what I've seen of
other languages which try to "correct" these errors, they end up
making things worse.

--
James Kanze mailto: james...@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


James Kanze

unread,
Dec 10, 2005, 4:30:06 PM12/10/05
to
Peter Most wrote:
> kanze wrote:

>>>I think only small, short running programs can "live" with a
>>>crash. But all other programs simply can not accept a crash.

>>Realistically, all programs must be able to live with a crash.
>>At least on my system, dereferencing an out of bounds pointer
>>will cause a crash, as will accessing through a mis-aligned
>>pointer. And there's really no way to catch the first of
>>these.

> The application I was working on was multithreaded, where the
> threads were quite independent from each other, meaning if
> one thread crashes then the program could and should continue
> to work with the other threads. But the usual behavior for
> threads makes this very difficult, because if one thread
> crashes then the complete process is crashing. In this
> scenario I prefer a thrown exception which I can catch and log
> in a toplevel handler and then let the thread die gracefully so
> that the other threads can still continue.

Did the threads share data? If not, then why threads, and not
separate processes ? If so, how do you know that the shared
data is not corrupted?

>>In critical applications, it is an absolute rule that in the
>>slightest doubt concerning the correction of the program, it
>>must crash. Stumbling on with possibly incorrect data is not
>>considered an alternative.

>>The same thing holds for any number of other applications.
>>Most of the time, wrong answers are worse than no answers.

>>There are clearly exceptions: the most obvious is a demo
>>program for an exposition, where a crash is highly visible,
>>and no one is going to use the results, or even verify that
>>they are correct. But any time the results are to be used for
>>anything significant, it is important that they be correct,
>>and crashing is preferable to wrong results.

> That's a point I still don't quite understand. If I access the
> vector with a wrong index and an exception is thrown, how can
> it be that the program is continuing with wrong data?

Where did the index come from? Data, I presume. If the data
wasn't corrupted, the index wouldn't be out of bounds.

> This would only be possible if the developer deliberately and
> irresponsibly catches the exception and tries to muddle through
> with some made-up and hence wrong data. Are you talking about
> such a scenario?

It's a frequent scenario. But you don't have to be so extreme.
What do your destructors do? You're executing unnecessary code
in an unsure environment.

James Kanze

unread,
Dec 10, 2005, 4:33:39 PM12/10/05
to
Francis Glassborow wrote:
> In article
> <1134136164.9...@g47g2000cwa.googlegroups.com>, kanze
> <ka...@gabi-soft.fr> writes

>>In critical applications, it is an absolute rule that in the
>>slightest doubt concerning the correction of the program, it
>>must crash. Stumbling on with possibly incorrect data is not
>>considered an alternative.

> How critical is an operating system?

Depends on the machine.

> Surely such a program should never crash regardless of how it
> is abused; it needs to recover and continue.

Regardless of how it is abused. That's the key. We're not
talking about external abuse here; we're talking about internal
errors. And if an operating system happens to detect an
internal error, it should crash -- all of the ones I use do.

> Ideally it should discontinue the process that caused the
> problem whilst continuing with everything else. Operating
> systems that fail to meet this criterion are not normally
> considered to be satisfactory (though most of us have had to
> endure using such systems.)

But that's irrelevant to the question at hand. The operating
system validates all parameters to system calls, and terminates
the process if any of them are wrong. The operating system also
does some internal consistency checks... and crashes if they
fail. And we're talking here about internal consistency checks
failing, not about some "external" user passing a bad parameter.
(Although appropriate for processes, I do believe that
terminating a human user because he passed a bad parameter would
be a bit extreme. Although I have heard of at least one program
that did just that.)

--
James Kanze mailto: james...@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


James Kanze

unread,
Dec 10, 2005, 4:35:01 PM12/10/05
to
Bob Bell wrote:
> Walter Bright wrote:

>>"Branimir Maksimovic" <bm...@hotmail.com> wrote in message
>>news:dnck3n$n0c$1...@domitilla.aioe.org...

>>>I completely agree. It's just that I've never had a case where an
>>>out-of-bounds error was anything but a fatal error.

>>Consider this:

>>try
>>{
>> for (i = 0; 1; i++)
>> array[i] = ...;
>>}
>>catch (ArrayBoundsException)
>>{
>>}

>>If array is a large array, that version of the loop will be significantly
>>faster than:

>> for (i = 0; i < array.length; i++)
>> array[i] = ...;

>>because in the latter the array bounds check gets done twice
>>per iteration, rather than once. The redundant check can
>>sometimes be eliminated by a more advanced data flow analysis
>>optimizer, but not always.

> I doubt it would be significantly faster unless "..." is trivial.

I doubt it would be faster at all unless the compiler is
particularly stupid. Any decent compiler should hoist array
bounds checking out of the loop.

> This seems more like an abuse of exception handling than an
> argument in favor of checked array indices. In any case, I
> would think the "safer is more important than faster" criteria
> would prefer the second alternative over the first. It may be
> slower, but it is easier to read and understand as correct.
> The first one requires more thinking, and all it takes is a
> spurious out of bounds array index from the "..." (which could
> be caused by a bug) to ruin the correctness.

> I also don't recall ever having an out of bounds index that
> wasn't caused by a bug.

Well, one could argue that this is because you program in C/C++,
where it is a bug:-). If an array bounds error had defined
behavior, you could presumably write code which depended on it.

My personal feeling is that this would be a good trick for
obfuscation, and not much else. But of course, that's not based
on concrete experience.

--
James Kanze mailto: james...@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

Walter Bright

unread,
Dec 10, 2005, 8:12:05 PM12/10/05
to

"Seungbeom Kim" <musi...@bawi.org> wrote in message
news:dndmqj$epi$1...@news.Stanford.EDU...

> Walter Bright wrote:
> > I agree, this is the sensible approach. Too often in C++, the
> > straightforward intuitive way to do something is the least safe and most
> > deprecated way. (Virtualness of destructors comes to mind.)
>
> I don't disagree, but let me put it this way: it is a conflict between
> having orthogonal features vs having them in the most commonly used way.
>
> No member function is virtual unless explicitly declared so or inherited
> from a virtual one, and it's just that a destructor is no exception. A
> function's being virtual and its being a destructor are orthogonal and
> do not affect each other. It makes the language neater and easier to
> explain and understand, as there's one less (special) rule to remember.
> On the other hand, in practice we often need the destructor to be
> virtual if any member function is virtual; hence the "guideline" such as
> "Declare your destructor virtual if any member function is virtual." -
> one more to remember.

It's possible to look at rules from the other direction: all member
functions are virtual (including destructors) unless specifically marked
otherwise (such as with a 'final' attribute). A further refinement of this
idea is that a derived class cannot override a 'final' member function in a
base class, and a final attribute for the whole class makes all its member
functions final.

I believe this solution is superior because the program will still work
correctly even if all the 'final's are omitted. Adding in the 'final's would
be an optimization. This fits in neatly with the idea that the
straightforward, intuitive way is the correct way, and that the more
advanced methods are available for the more advanced programmer who wishes
to tune the code. Furthermore, some measure of safety against misuse is
there, as one cannot override a non-virtual method.

BTW, this is how virtualness is done in the D programming language.

Walter Bright
www.digitalmars.com C, C++, D programming language compilers
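
For comparison only: C++ itself, years after this thread, gained a
'final' keyword (C++11) covering part of what is described here, though
C++ keeps non-virtual as the default. A sketch:

struct Widget {
    virtual ~Widget() {}
    virtual void draw() final {}   // derived classes may not override
};

struct Button final : Widget {};   // no class may derive from Button

// struct Fancy : Button {};       // error: Button is final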

Torsten Robitzki

unread,
Dec 10, 2005, 8:13:36 PM12/10/05
to
gerg wrote:
> I believe that a C++ which used the exception behaviour of Java would
> be great for the language. In the example mentioned above, the ignorant
> programmer would not be able to compile his code using the [] operator
> unless he caught an "invalid access" exception.

So one example of summing up a vector of integers could/should look like
this?:

int sum = 0;

try {
    for (std::vector<int>::size_type i = 0; i != v.size(); ++i)
        sum += v[i];
} catch ( const std::invalid_access& )
{
    std::cerr << "ups." << std::endl;
}

ending up with 2 * v.size() bounds checks plus a pointless exception handler.

Fortunately there are still iterators and algorithms.

regards
Torsten
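
For comparison, the iterator/algorithm alternative alluded to above;
with no index there is no per-element bounds check to argue about:

#include <numeric>
#include <vector>

int sum_of(const std::vector<int>& v)
{
    return std::accumulate(v.begin(), v.end(), 0);
}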

Andrei Alexandrescu (See Website For Email)

unread,
Dec 10, 2005, 8:14:36 PM12/10/05
to
Bob Bell wrote:
> I also don't recall ever having an out of bounds index that wasn't
> caused by a bug.

Honest, me neither. I do recall, however, having spent lots of time on
soft errors caused by out of bounds access, versus finding the problem
with extreme ease when bounds checking was in force. How about you? How
about others?

Andrei

Andrei Alexandrescu (See Website For Email)

unread,
Dec 10, 2005, 8:11:17 PM12/10/05
to
Francis Glassborow wrote:
> In article <Ir8y4...@beaver.cs.washington.edu>, "Andrei Alexandrescu
> (See Website For Email)" <SeeWebsit...@moderncppdesign.com> writes
>
>>>I meant to reverse the semantics i.e.:
>>>1) operator[] checks the index and throws an out_of_range exception (as at()
>>>is currently doing).
>>>2) at() isn't checking anything and maybe crashes (as operator[] is
>>>currently doing).
>>
>>I think that's a more sensible choice than the status quo. I also think
>>the suggestion will not make it into the standard :o).
>
>
> You can absolutely bet on that. Unfortunately as one of the functions is
> an operator we do not even have the option of providing an extra
> parameter that defaults to the current status. In simple terms we will
> have to improve our education of C++ programmers by warning them that
> STL implementations of operator[] have the same problem that operator[]
> has for raw arrays.
>
> Now that last sentence may explain why we have the current requirements,
> in general user defined operators should behave like built in ones.

They would behave the same, except when the built in operator[] would
not behave in any defined way, in which case it's fine to do whatever -
including some defined behavior. "Undefined" includes "something in
particular" :o).

Andrei

Andrei Alexandrescu (See Website For Email)

Dec 10, 2005, 8:14:58 PM

Seungbeom Kim wrote:
> Walter Bright wrote:
>
>>"Andrei Alexandrescu (See Website For Email)"
>><SeeWebsit...@moderncppdesign.com> wrote in message
>>news:Ir8vF...@beaver.cs.washington.edu...
>>
>>
>>>The problem I have with the unchecked operator[] is that it is the most
>>>convenient to use, and therefore the most used. The default should be
>>>safety, because most of the time we want safety. Only rarely do we need
>>>absolute speed, and therefore the unsafe speed should be achievable only
>>>with extra syntax.
>>
>>I agree, this is the sensible approach. Too often in C++, the
>>straightforward intuitive way to do something is the least safe and most
>>deprecated way. (Virtualness of destructors comes to mind.)
>
>
> I don't disagree, but let me put it this way: it is a conflict between
> having orthogonal features and having them default to the most commonly
> used behavior.

Maybe sometimes, but there are too many examples where orthogonality has
nothing to do with it. E.g., all variables should be initialized by
default, unless the programmer asks that they not be.
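
Concretely, in C++ the unsafe spelling is today the shortest one, and the
suggestion would flip that default (the names below are illustrative only):

void f() {
    int a;        // indeterminate value; reading it before assignment is undefined
    int b = 0;    // the safe form is the one that costs extra typing
}

// Under the suggested default, 'int a;' would mean 'int a = 0;', and opting
// out of initialization would take explicit syntax (D, for example, spells
// it 'int a = void;').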

Andrei

Walter Bright

Dec 10, 2005, 8:10:56 PM

"Felipe Magno de Almeida" <felipe.m...@gmail.com> wrote in message
news:1134206624.3...@f14g2000cwb.googlegroups.com...

> Non-virtual destructors, IMO, have a strong meaning, which helps to
> self-document code. When one sees a class with a virtual destructor,
> one knows it is meant for polymorphic use; but when one sees a class
> without any virtual functions, one must really think about whether
> inheriting from it is a good idea. IMHO, it really helps when designing
> software in C++.

The problem with that approach is that a non-virtual destructor *implies*
one is not meant to derive from the class, but it is implication only. For a
code reviewer, a language guarantee of such would be preferable, if that is
indeed the intent of the original programmer rather than an (unfortunately
common) mistake.

In the D programming language, this idea is approached from the opposite
direction. Declaring a class to be 'final' means that one cannot derive from
it, and all its members become non-virtual. It's less likely that one types
in 'final' by accident than that one forgets to add 'virtual'. Furthermore,
since the language offers a guarantee that a 'final' class cannot be derived
from, I view that as more reliably self-documenting than an implication by
omission.
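
C++ itself eventually gained exactly this class-level guarantee, with the
'final' specifier standardized in C++11 long after this exchange; a minimal
sketch:

class Sealed final {        // the compiler now enforces the intent
    // ...
};

// class Derived : public Sealed {};   // error: cannot derive from 'final' class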

Walter Bright
www.digitalmars.com C, C++, D programming language compilers


Walter Bright

Dec 10, 2005, 8:13:15 PM

"Niklas Matthies" <usenet...@nmhq.net> wrote in message
news:slrndplbsc.2np...@nmhq.net...

> It's fundamentally wrong to (have to) write such code.

True, but I don't see that anyone said one must write such code.

> Programming languages should cater to the programmer, not the other way
> round.

True as well, as that is the whole point of having a programming language,
but often there is no consensus on what that means!

> Also, this has the usual problem of selective exception catching:
> Some other code executed within the loop (i.e. within the "..." or
> the assignment operator) could also happen to access arrays (if it
> doesn't now, some future program modifications could create such a
> situation), and (due to some bug) also throw an ArrayBoundsException,
> and you don't want to catch those here.

You're quite right about that.

Andrei Alexandrescu (See Website For Email)

Dec 10, 2005, 8:15:19 PM

Dietmar Kuehl wrote:
> On the other hand, I would also provide checked containers to
> relative novices, also using 'operator[]()' but a range checked one.
> IMO, creating a custom container isn't a big deal, and if I have
> specific requirements that is exactly what I do. I would still
> appreciate the freedom for the library implementer to choose which
> [default] policy suits the targeted user base best.

How about the standard library? Should it just leave operator[] with
undefined behavior and wash its hands?

> Do modular exceptions do stack unwinding like normal exceptions do?
> Both possible answers cause situations increasing the problem: if
> they don't do normal stack unwinding, they cannot restore partially
> changed data structures. If they do normal stack unwinding they run
> the risk of corrupting the program's state even more.

I don't know. I do think, however, that the risk is low. Destructors
tend to clean up modular data structures, and less often affect global
state. I know you can come up with examples to the contrary, and then I
can come up with examples to the contrary of the contrary. Absent hard
data, we'll reach a diplomatic impasse :o).

>>How about defining exceptions such that they cannot be handled but in a
>>restricted set of modules? They should pass right through the other
>>modules without any chance of being caught, and make it to a few
>>carefully designed modules that handle them.
>
>
> Handle them to what end? To continue with corrupted program state?
> I doubt that you can come up with carefully designed modules which
> recover from yet unknown programming errors (if the programming
> errors were known, it would be easier to fix them than to create a
> module recovering from them).

That's not true. I know of systems that must continue execution with a
known error for various reasons. For example, the fix-and-deployment cycle
takes a long time, is impractical, or is impossible (embedded applications).
Or because the application uses multiple, relatively independent threads -
a corrupt thread does not necessarily corrupt the state of the entire
application.
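
A minimal sketch of one such design, assuming C++11's std::thread (which
postdates this thread; any platform thread API of the day would serve):
catch at the thread boundary, so a failed task is abandoned without taking
the whole process down.

#include <exception>
#include <iostream>
#include <thread>

void worker() {
    try {
        // ... task code; a checked container access may throw std::out_of_range ...
    } catch (const std::exception& e) {
        std::cerr << "worker aborted: " << e.what() << '\n';
        // only this thread's work is abandoned; the process keeps running
    }
}

int main() {
    std::thread t(worker);
    t.join();
}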


Andrei
