
Is THINK C 6.0 really that bad?


Ron_Hu...@bmug.org

Nov 9, 1993, 1:44:58 PM
To: Pete Gontier,gur...@netcom.com,UUCP

PG> in short, what I said was that a slow, incremental evolution *toward*
PG> C++ would have been *much* better for the developer community than for
PG> Symantec to release a massively buggy C++ compiler which scarcely
PG> works at all but can claim to be real C++. Would have been better for
PG> Symantec, too, because with one fell swoop they have ruined this
PG> product's credibility forever.

I have this natural distrust of "slow, incremental evolution". It's
too much like trying to leap a chasm in two easy jumps.

I agree that releasing a massively buggy compiler was a foolish thing
for Symantec to do. I hope someone at Symantec learned a lesson from
this.

We seem to disagree about what is and isn't semantic sugar, but I don't
like using a language that is only half-implemented. You wind up
coding around the missing pieces, and then when the language is finished,
it's too much work to go back and do it right. Especially with
exceptions, since the workaround distorts the very structure of your
code.

PG> RH> But exception handling is important.
PG>
PG> Yes. It's important to make sure not to use it! :-) Exceptions are
PG> the institutionalized introduction of FORTRAN into C++. They fix two
PG> things which are broken in C++:
PG>
PG> 1. constructors.
PG> 2. the notion that classes are first-class data-types.

Huh? Important not to handle exceptions? What do you use, prayer? :-)
But seriously, I don't see how exceptions would introduce FORTRAN into
C++. Quite the contrary. One of the big problems with FORTRAN is that
you have to just keep plodding along working with low-level data types
and low-level operations, because FORTRAN doesn't have high-level data
types, and its only high-level operation is procedure call.

In C++ without exceptions, you are forced to code the same way: do a
little bit of work; check for errors; do a tiny little bit more; check
again; keep on plodding. Worse yet, when you encounter an error, you
have to handle it right away, or pass an error code back to your caller,
who has to check again for an error that you have already detected. You
wind up doing more checking than work.

With exceptions, you can say: Do all of this, and if anything goes wrong
anywhere, here's what I want you to do. The error handling is all in one
place, instead of being scattered around.

More importantly, the error can be detected where it happens, and then
handled where you know what to do about it. These are not generally the
same place.
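
To make that concrete, the shape is roughly this (just a sketch; the
function names and the IOError type are all invented):

struct IOError { long err; IOError(long e) : err(e) {} };

void WriteBlock()                       // deep inside the call tree
{
    long diskFull = 1;                  // stand-in for a real failure check
    if (diskFull)
        throw IOError(-34);             // detected *here* (-34 = dskFulErr)...
}

void CopyFork()
{
    WriteBlock();                       // no error checks cluttering the work
}

void DoCopyCommand()                    // ...handled *here*, near the UI
{
    try {
        CopyFork();
    }
    catch (IOError& oops) {
        // report to the user, delete the half-written copy, etc.
    }
}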

PG> RH> The only workaround is to write constructors that cannot fail.
PG> RH> But that requires in most cases that the constructor be trivial,
PG> RH> (in particular, it cannot allocate memory, because that can fail)
PG>
PG> That's what I do.
PG>
PG> RH> with the real initialization being done by a separate user-written
PG> RH> initialization routine that the user has to remember to call after
PG> RH> the constructor finishes.
PG>
PG> Well, sort of. You can write a separate initializer member function
PG> and have all other member functions call it before attempting
PG> other work. This relieves the client programmer from doing two-step
PG> initialization, but granted, it is clumsier than being able to return
PG> an error code from a constructor. It does work, and it works pretty
PG> well.

I don't like this, from an efficiency standpoint. While I will agree
whole-heartedly that correctness is more important than efficiency, I
don't like to throw efficiency out the window either. Many member
functions will be small, and the extra initializing code may dominate
the run time. There has to be a better way.

And it doesn't work, anyway. Making a constructor not fail is a more
subtle problem than that. With your method, you not only have to make
sure the constructor doesn't fail directly, but that it doesn't call
any member functions of its sub-objects, because they may fail when they
try to initialize themselves. And even if you're careful within the
constructor, how do you handle an initialization error later? By the
time the error is detected, you're off doing something else. Seems to
me like you haven't gotten away from the need for exception handlers.

BTW, I don't want constructors to RETURN an error code. I don't think
error codes are the right way to handle errors. I want the constructor
(or any other code that finds an error) to THROW the error.
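
Something like this sketch, say (Buffer and InitFailure are invented
names, and I use plain malloc so it isn't tied to any toolbox):

#include <stdlib.h>

struct InitFailure { long err; InitFailure(long e) : err(e) {} };

class Buffer {
public:
    Buffer(long size)
    {
        fData = (char*) malloc(size);
        if (fData == 0)
            throw InitFailure(-108);    // -108 = memFullErr, for instance
        fSize = size;
    }
    ~Buffer() { free(fData); }
private:
    char* fData;
    long  fSize;
};

void Example()
{
    try {
        Buffer big(100000L);            // no separate Init() to remember
        // ... use it ...
    }                                   // destructor runs here
    catch (InitFailure&) {
        // 'big' never existed, so there is nothing half-built to clean up
    }
}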

PG> RH> I don't call that sugar-coating.
PG>
PG> Neither do I.

Glad to hear it.

-Ron Hunsinger

Pete Gontier

Nov 10, 1993, 11:52:57 PM

Ron_Hu...@bmug.org writes:

>Huh? Important not to handle exceptions? What do you use, prayer? :-)

I use discipline. I know you can't assume that everyone has it. I
know I would never expect anybody else to produce disciplined code.
Well, one or two programmers I have met could produce it. Maybe you,
too, but I haven't seen your work. :-)

>But seriously, I don't see how exceptions would introduce FORTRAN into
>C++. Quite the contrary. One of the big problems with FORTRAN is that
>you have to just keep plodding along working with low-level data types
>and low-level operations, because FORTRAN doesn't have high-level data
>types, and its only high-level operation is procedure call.

These are apples and oranges. Definitely C++ gives you "better" data
types than FORTRAN. But data types != exceptions. Exceptions give
C++ a big fat GOTO statement. Sounds like FORTRAN to me...

> In C++ without exceptions, you are forced to ... do a little bit of
> work; check for errors; do a tiny little bit more; check again; keep
> on plodding. Worse yet, when you encounter an error, you have to
> handle it right away,

I do recovery immediately and reporting at the top of the call tree
(on the Mac, in the main event loop).
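
In outline it looks something like this (only a sketch; every name in
it is invented):

typedef short OSErr;
enum { noErr = 0 };

static OSErr DoLowLevelWork() { return -36; }      // say, an I/O error

static OSErr HandleMenuCommand()
{
    OSErr err = DoLowLevelWork();
    if (err != noErr) {
        // recover *immediately*: free temporaries, close what we opened
    }
    return err;                                    // pass it up unchanged
}

static void ReportError(OSErr /* err */) { /* put up an alert */ }

void MainEventLoop()
{
    // ... get an event, dispatch it ...
    OSErr err = HandleMenuCommand();
    if (err != noErr)
        ReportError(err);                          // reporting in ONE place
}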

> or pass an error code back to your caller, who
> has to check again for an error that you have already detected. You
> wind up doing more checking than work.

I am unclear why people are concerned about this. I do wish C++ had
a function modifier which *forced* callers to pay attention to return
values (error codes). But I trust myself not to ignore them.

If your concern is convenience, and you're my boss, I quit. :-) More
seriously, I've done this.

If your concern is performance, that doesn't exactly fly, either,
because most programs are i/o and memory-allocator bound and those
that aren't go off and do long computations which for the most part
can't generate errors.

>With exceptions, you can say: Do all of this, and if anything goes wrong
>anywhere, here's what I want you to do. The error handling is all in one
>place, instead of being scattered around.

Yeah, but in *which* place? It's impossible to conclusively tell
where a thrown exception goes without a grep. Unless you like to
memorize these things.

Besides, I don't see where you get the idea exceptions are handled in
any fewer places than error codes. Are you saying your strategy is to
catch all exceptions in one place? Shades of ON ERR GOSUB... But I
don't think this is what you mean, which is exactly my point --
exception catching isn't done in any fewer places than error code
"parsing".

>More importantly, the error can be detected where it happens, and then
>handled where you know what to do about it. These are not generally the
>same place.

Nothing stopping you from doing the same with error codes. Bonus:
you get to see the recovery logic *right in the source code*. No
jumps to some other place you can't see at the moment. High tech,
eh?

>PG> RH> The only workaround is to write constructors that cannot fail.
>PG> RH> But that requires in most cases that the constructor be trivial,
>PG> RH> (in particular, it cannot allocate memory, because that can fail)
>PG>
>PG> That's what I do.
>PG>
>PG> RH> with the real initialization being done by a separate user-written
>PG> RH> initialization routine that the user has to remember to call after
>PG> RH> the constructor finishes.
>PG>
>PG> Well, sort of. You can write a separate initializer member function
>PG> and have all other member functions call it before attempting
>PG> other work. This relieves the client programmer from doing two-step
>PG> initialization, but granted, it is clumsier than being able to return
>PG> an error code from a constructor. It does work, and it works pretty
>PG> well.

>I don't like this, from an efficiency standpoint. While I will agree
>whole-heartedly that correctness is more important than efficiency, I
>don't like to throw efficiency out the window either. Many member
>functions will be small, and the extra initializing code may dominate
>the run time. There has to be a better way.

People are always worried about performance. Bah! If you've got a
class where inline functions and small oft-called functions are a
significant win, odds are you're looking at a numeric class or
something similar which can't generate errors (maybe they blow up if
you try to divide by zero, but that happens with integers, too, and
nobody expects a C++ exception from that). Most other classes tend to
be i/o or memory-allocator bound. And of course, checking a flag to
see if the object has already been initialized doesn't take any time
anyway. You can even do the flag check inline, if you must, and call
a lower-level initializer if it's false.
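
A sketch of what I mean (the Cache class and everything in it is
invented, and untested):

#include <stdlib.h>

class Cache {
public:
    Cache() : fTable(0) {}                   // trivial ctor, cannot fail
    ~Cache() { free(fTable); }

    long Lookup(long key)
    {
        if (fTable == 0)                     // the cheap inline check
            if (!Init())                     // heavy work deferred to here
                return -1;                   // or hand back an error code
        return fTable[key & 255];
    }

private:
    int Init()                               // the "lower-level initializer"
    {
        fTable = (long*) calloc(256, sizeof(long));
        return fTable != 0;
    }

    long* fTable;
};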

And, of course, C++ exceptions add overhead to all code which uses
them (the new Borland compiler even has a command-line switch to turn
off the overhead if you don't use exceptions). And I'll bet that
overhead is more significant than a 16-bit test-and-branch after each
relevant function call.

>And it doesn't work, anyway. Making a constructor not fail is a more
>subtle problem than that. With your method, you not only have to make
>sure the constructor doesn't fail directly, but that it doesn't call
>any member functions of its sub-objects, because they may fail when they
>try to initialize themselves.

Right. It never occurred to me to call sub-object member functions
during a constructor. With my mindset, you don't get the urge.
Constructors are not for doing significant work. I think of them like
a bootstrap ROM -- just barely enough code to make the environment
ready for someone else to do some real work.

>And even if you're careful within the
>constructor, how do you handle an initialization error later? By the
>time the error is detected, you're off doing something else. Seems to
>me like you haven't gotten away from the need for exception handlers.

You get an error code you weren't explicitly expecting and back out.
Errors happen. I consider them a normal outcome. Other people seem to
think only a few things go wrong. Classic example: opening a file:
people check for errors and assume they're all file-not-found. No,
about 100 things could have gone wrong. If you assume they can, then
you aren't surprised when something you didn't expect does happen.
Like perhaps the failure of a late ("lazy"?) initialization.
--
Pete Gontier // EC Technology // gur...@netcom.com

"Reality is 50 million polygons per second." -- Alvy Ray Smith

Jon Wätte

Nov 11, 1993, 9:32:27 AM

In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>types than FORTRAN. But data types != exceptions. Exceptions give
>C++ a big fat GOTO statement. Sounds like FORTRAN to me...

C++ already has GOTO, and, used correctly, GOTO is a godsend.
Similarly, break and continue are also gotos, with goto points
no more well defined than for exceptions.

Indeed, an argument could be made that exceptions are what GOTO
_should_ have been, i e they add a layer of discipline and control
to the dangerous setjmp() semantics...
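
(For reference, the raw machinery underneath is little more than
setjmp/longjmp. A bare-bones sketch -- not the actual TCL macros, which
keep a stack of handlers instead of a single global jmp_buf:)

#include <setjmp.h>
#include <stdio.h>

static jmp_buf gCatcher;

static void Fail(int err)            // the "throw"
{
    longjmp(gCatcher, err);
}

static void DeepInTheCallTree()
{
    Fail(-39);                       // something went wrong way down here
}

int main()
{
    int err = setjmp(gCatcher);      // the "try": returns 0 the first time
    if (err == 0) {
        DeepInTheCallTree();
        printf("no error\n");
    } else {
        printf("caught %d\n", err);  // the "catch"
    }
    return 0;
}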

>> In C++ without exceptions, you are forced to ... do a little bit of
>> work; check for errors; do a tiny little bit more; check again; keep
>> on plodding. Worse yet, when you encounter an error, you have to
>> handle it right away,

>I do recovery immediately and reporting at the top of the call tree
>(on the Mac, in the main event loop).

This does not work for many cases. For instance, you're scanning a
list of N objects, and you can't get a specific kind of information
about an object. Is only this object lacking or all objects? Does the
user want to see the whole list incomplete? Our case needs a dialog
that basically says "The property X of item Y couldn't be shown.
Press "Silent" to avoid alerts like this for the other items in the
container." ... with "Silent" "OK" and "Cancel"

There are even more interesting cases where you, say, overload
semantics of closing a window to also do some other operations, that
may fail "Overwrite existing XXX in the container you move to?" or
be cancelled and trigger more operations...

Without the TCL exception mechanism I would be in real trouble.
As it is, I can get by, but C++ exceptions would be wonderful.

And, indeed, for utility functions like "CopyFork" I use something
like:

short err = noErr ;

if ( ! err ) {
    err = GetEOF ( in , & end ) ;
}
if ( ! err ) {
    err = SetEOF ( out , end ) ;
}
if ( ! err .....

return err ;

Every tool has its proper field.

>> or pass an error code back to your caller, who
>> has to check again for an error that you have already detected. You
>> wind up doing more checking than work.

>I am unclear why people are concerned about this. I do wish C++ had
>a function modifier which *forced* callers to pay attention to return
>values (error codes). But I trust myself not to ignore them.

Because it clutters your code and algorithms, that's why. And 9 cases of
10, all the recovery you want to do is almost the same with something
added. In the copy file case, you really would want to do something like:

open_original ( ) ;
protect {
    create_destination ( ) ;
    protect {
        open_destination ( ) ;
        protect {
            read_original ( ) ;
            write_destination ( ) ;
        } unwind {
            close_destination ( ) ;
        }
    } unwind {
        delete_destination ( ) ;
    }
} unwind {
    close_original ( ) ;
}

Ideally, there would be an easier syntax than this so you don't
get the LISP-like right slant...

Maybe:

copy_file ( ) {
    open_original ( ) ;
    prime { close_original ( ) ; }
    create_destination ( ) ;
    prime { delete_destination ( ) ; }
    open_destination ( ) ;
    prime { close_destination ( ) ; }
    read_original ( ) ;
    write_destination ( ) ;
}

(supposedly falling out of the context the prime{} was declared in
cancels that prime. The calls would call fire() if they fail)
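
(You can get much of the prime{} effect in plain C++ with destructors --
a sketch with invented names; here the closers run on success as well as
failure, which is usually what you want for files, and only the delete
is disarmed on success:)

class ForkCloser {
public:
    ForkCloser(short refNum) : fRefNum(refNum) {}
    ~ForkCloser() { /* FSClose(fRefNum) would go here */ }
private:
    short fRefNum;
};

class DestDeleter {
public:
    DestDeleter() : fArmed(1) {}
    ~DestDeleter() { if (fArmed) { /* delete the half-written file */ } }
    void Disarm() { fArmed = 0; }        // call this on success
private:
    int fArmed;
};

void copy_file()
{
    short inRef = 0;                     // open_original() would set this
    ForkCloser closeIn(inRef);           // ~ prime { close_original(); }

    // create_destination();
    DestDeleter scratchDest;             // ~ prime { delete_destination(); }

    short outRef = 0;                    // open_destination()
    ForkCloser closeOut(outRef);         // ~ prime { close_destination(); }

    // read_original(); write_destination();   (either may throw)

    scratchDest.Disarm();                // keep the copy only if we got here
}                                        // dtors run here, in reverse order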

>If your concern is convenience, and you're my boss, I quit. :-) More
>seriously, I've done this.

Well, "convenience" may mean many things. Being able to read the
code without a machete is nice nine months later when you go in
there to modify functionality.

>>With exceptions, you can say: Do all of this, and if anything goes wrong
>>anywhere, here's what I want you do. The error handling is all in one
>>place, instead of being scattered around.

>Yeah, but in *which* place? It's impossible to conclusively tell
>where a thrown exception goes without a grep. Unless you like to
>memorize these things.

Huh? That depends on how you declare your exceptions, and how you
declare your exception handlers.

>don't think this is what you mean, which is exactly my point --
>exception catching isn't done in any fewer places than error code
>"parsing".

Yes it is. What do you do, for instance, to separate Mac OS codes
from your OWN error codes (i e something was OK according to the OS
but not according to your algorithm)? You can't use positive codes
for this, since there are system calls which return positive error
codes as well (and SysErr...)

>People are always worried about performance. Bah! If you've got a
>class where inline functions and small oft-called functions are a
>significant win, odds are you're looking at a numeric class or
>something similar which can't generate errors (maybe they blow up if

So you would fly an airplane that has 3:1 odds of not crashing?
What about those, NOT TOO FEW cases where that class isn't a
compute-bound class? Or where the computation or mangling the
class does can fail in several ways that you need to recover from?

>off the overhead if you don't use exceptions). And I'll bet that
>overhead is more significant than a 16-bit test-and-branch after each
>relevant function call.

Maybe not. You forget the assignment-of-error-code-variable :-)
More seriously, the overhead for exceptions is only once per function
(or exception block)

Anyway, overhead isn't my big problem. Code readability and
reliability is.

--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --
"It is not the interfaces responsibility to give access to the application,
it is the applications responsibility to implement the interface."
-- Apple Direct (?)

Pete Gontier

Nov 11, 1993, 2:09:52 PM

By the way, YES, SymC++ 6.0 really is "that bad". Just to clarify for
anyone who has tuned in late...

d88...@dront.nada.kth.se (Jon Wätte) writes:

>In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>types than FORTRAN. But data types != exceptions. Exceptions give
>>C++ a big fat GOTO statement. Sounds like FORTRAN to me...

>C++ already has GOTO, and, used correctly, GOTO is a godsend.
>Similarly, break and continue, are also gotos, with goto points
>no more well defined than for exceptions.

I can see that this discussion is already futile, but I will plod on
anyway. Exceptions are *worse* than the GOTO that's already there,
because they're non-local.

>Indeed, an argument could be made that exceptions are what GOTO
>_should_ have been, i e they add a layer of discipline and control
>to the dangerous setjmp() semantics...

Adding type-safety to a non-local GOTO is like scrambling to add air
bags to an airplane you're landing without a steering wheel.

>>> In C++ without exceptions, you are forced to ... do a little bit of
>>> work; check for errors; do a tiny little bit more; check again; keep
>>> on plodding. Worse yet, when you encounter an error, you have to
>>> handle it right away,

>>I do recovery immediately and reporting at the top of the call tree
>>(on the Mac, in the main event loop).

>This does not work for many cases. For instance, you're scanning a
>list of N objects, and you can't get a specific kind of information
>about an object. Is only this object lacking or all objects? Does the
>user want to see the whole list incomplete? Our case needs a dialog
>that basically says "The property X of item Y couldn't be shown.
>Press "Silent" to avoid alerts like this for the other items in the
>container." ... with "Silent" "OK" and "Cancel"

True, and sometimes I do make an "exception" (pun intended) for cases
like this. However, how do exceptions help you implement this? They
don't.

>There are even more interesting cases where you, say, overload
>semantics of closing a window to also do some other operations, that
>may fail "Overwrite existing XXX in the container you move to?" or
>be cancelled and trigger more operations...

And if the possibility that the user can cancel isn't acknowledged in
an explicit way, then that's a problem with the app, not with error
checking.

For example, to my oApp object you need to ask the question "Is it OK
to quit?" oApp passes the question on to the window list. The window
list passes the question on to each of the windows. If any of them
says 'no', the answer comes all the way back up to oApp and the
take-home message is: "No, you can't quit. Keep looping."
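
In sketch form, with invented stand-ins rather than my real classes:

class oWindow {
public:
    virtual ~oWindow() {}
    virtual int OkToClose() { return 1; }    // a dirty document overrides this
};

class oWindowList {
public:
    oWindowList() : fCount(0) {}
    int OkToClose()
    {
        for (int i = 0; i < fCount; i++)
            if (!fWindows[i]->OkToClose())
                return 0;                    // one "no" answers for everyone
        return 1;
    }
    oWindow* fWindows[32];
    int      fCount;
};

class oApp {
public:
    oApp() : fDone(0) {}
    void HandleQuit()
    {
        if (fWindows.OkToClose())
            fDone = 1;                       // otherwise: keep looping
    }
    oWindowList fWindows;
    int         fDone;
};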

If you're hiding this sort of information exchange in your error
handling, you're doing yourself a significant disservice. *And* I
can't imagine how exception handling aids you in doing this
disservice to yourself, except that it might be more difficult to
figure out how to fix it later.

>>> or pass an error code back to your caller, who
>>> has to check again for an error that you have already detected. You
>>> wind up doing more checking than work.

>>I am unclear why people are concerned about this.

>Because it clutters your code and algorithms, that's why.

I guess *this* is what I really don't understand. This sounds
incorrigibly ivory-tower to me. I think of run-time errors as part of
the algorithm. They aren't cluttering anything, they're making the
flow-control obvious. I never envision writing code that looks like
algebra. I guess other folks do -- this probably has to do with math
being a prerequisite for computer science ... but none of this has
anything to do with exceptions. The bottom line question is: are you
willing to trade obfuscated flow control for a lack of clutter?

Not only this, but in the example you post later, we see that
exceptions don't help at all to eliminate clutter -- the way I read
the example, exceptions *add* clutter.

>And 9 cases of 10, all the recovery you want to do is almost the
>same with something added.

Actually, in 99 cases out of 100, recovery isn't an issue. An error
is a failure. But then again I'm the sort who doesn't create the file
just because I couldn't open it -- and I'm not the sort who creates
the file, ignoring errors, and then attempts to open the file -- I
check for the file's existence explicitly first and I attempt to
create it only if it's not there.

>In the copy file case, you really would want to do something like:

> open_original ( ) ;
> protect {
>     create_destination ( ) ;
>     protect {
>         open_destination ( ) ;
>         protect {
>             read_original ( ) ;
>             write_destination ( ) ;
>         } unwind {
>             close_destination ( ) ;
>         }
>     } unwind {
>         delete_destination ( ) ;
>     }
> } unwind {
>     close_original ( ) ;
> }

The syntax is unfamiliar to me, but I get the feeling that even if it
weren't, I wouldn't see your point. I don't see any recovery here.
Nor do I see the need for recovery. My code would have looked almost
the same -- but with error code checking in place of the
protect/unwind sequences (and the error code checking would have
taken fewer lines).

Just for funsies, let's see what my version would have looked like.
I probably wouldn't choose this sequence of events, but then neither
would you if this weren't a hastily typed net example.

oExpn status, status2;

if (!(status = open_original ( )))
{
    if (!(status = create_destination ( )))
    {
        if (!(status = open_destination ( )))
        {
            if (!(status = read_original ( )))
                status = write_destination ( );

            status2 = close_destination ( );
            if (!status) status = status2;
        }
        if (status)
            delete_destination ( );
    }
    status2 = close_original ( );
    if (!status) status = status2;
}

return (status);


>Ideally, there would be an easier syntax than this so you don't
>get the LISP-like right slant...

Bias against right-slanted code is another thing I put in the box
with "goto, if used carefully..." Makes me despair that anyone even
says these things. Right-slant is beautiful. It's a form of
documentation. (And no, I've never hacked Lisp.) Wanting to make
it go away is...

>copy_file ( ) {
> open_original ( ) ;
> prime { close_original ( ) ; }
> create_destination ( ) ;
> prime { delete_destination ( ) ; }
> open_destination ( ) ;
> prime { close_destination } ;
> read_original ( ) ;
> write_destination ( ) ;
>}

...Ugh! You can't really want this. You're pulling my leg.
The compiler is doing the right-slant for you behind your back
and you like it that way? No. I won't believe you.

>..."convenience" may mean many things. Being able to read the


>code without a machete is nice nine months later when you go in
>there to modify functionality.

Heh. I'd rather modify code where the slant is right in my face, just
like I'd rather modify code without non-local GOTOs jumping to places
that aren't right in my face. I could take this too far -- I could
demand to see 68000 at all times -- but that wouldn't be a logic
issue -- that would be a performance issue -- assuming the compiler
had no codegen bugs. (No cracks about SymC++ -- this is a diff thread
now.)

>>Yeah, but in *which* place? It's impossible to conclusively tell
>>where a thrown exception goes without a grep. Unless you like to
>>memorize these things.

>Huh? That depends on how you declare your exceptions, and how you
>declare your exception handlers.

Well, yes, of course, but if you are being conscientious, you end up
declaring exception blocks in every place you would have declared
error status vars anyway. And then you're left with grepping for the
place an exception will jump to if you throw one.

>>don't think this is what you mean, which is exactly my point --
>>exception catching isn't done in any fewer places than error code
>>"parsing".

>Yes it is. What do you do, for instance, to separate Mac OS codes
>from your OWN error codes (i e something was OK according to the OS
>but not according to your algorithm)? You can't use positive codes
>for this, since there are system calls which return positive error
>codes as well (and SysErr...)

This negative/positive thing has little to do with exceptions. It
has to do with types. What I do currently for error types has nothing
to do with 'OSErr's except at the lowest levels, where they are
translated into something else that works better in a cross-platform
context and has its own type. In fact, it's a class. In any case,
exceptions wouldn't have helped me one iota with this type business.
I do wonder what anybody else could have gotten out of exceptions
used with OSErr's, because of the broken way C++ handles C primitive
types -- fnfErr could well be converted implicitly to a float or to
some other wonderful obscure custom numeric type. Unless you're
planning to *cast* fnfErr every time you use it, which makes me even
more queasy.
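
The flavor of it is roughly this (a sketch with made-up names, not my
actual class):

class ErrCode {
public:
    enum Domain { kNoError, kMacOS, kApp };

    ErrCode() : fDomain(kNoError), fCode(0) {}
    ErrCode(Domain d, long code) : fDomain(d), fCode(code) {}

    int    Failed() const { return fDomain != kNoError; }
    Domain GetDomain() const { return fDomain; }
    long   GetCode() const { return fCode; }

private:
    Domain fDomain;       // keeps OS codes and app codes from ever colliding
    long   fCode;         // and nothing here converts silently to float
};

ErrCode FromOSErr(short osErr)          // only the lowest levels call this
{
    return osErr ? ErrCode(ErrCode::kMacOS, osErr) : ErrCode();
}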

(If you really want to, you can put positive values in an OSErr
without significant risk. There are a few positive OSErr codes, but
they're associated with really obscure corners of the toolbox and can
be special-cased with glue if necessary, not that that is type safe
or anything... I used to use a system which did this -- along with
establishing an ID for each module, which compensated for the fact that
the OSErr codes can be generated by more than one module.)

As for returning DS errors ("Sorry, the file could not be copied
because a bus error occurred")... I wouldn't expect a C++ exception
to succeed in unwinding the stack after such an error, and I don't
think I would want it to try.

>>People are always worried about performance. Bah! If you've got a
>>class where inline functions and small oft-called functions are a
>>significant win, odds are you're looking at a numeric class or
>>something similar which can't generate errors (maybe they blow up if

>So you would fly an airplane that has 3:1 odds of not crashing?

I'm not sure where you're going with this line. I don't think anybody
would write a class which could generate errors and then just drop
them on the floor. Assuming they weren't dropping errors on the floor
in general, which I have seen.

>What about those, NOT TOO FEW cases where that class isn't a
>compute-bound class? Or where the computation or mangling the
>class does can fail in several ways that you need to recover from?

Well, sure, I acknowledge the possibility, but let's have an
example. And let's see if that example constitutes a likely
performance bottleneck.

>>off the overhead if you don't use exceptions). And I'll bet that
>>overhead is more significant than a 16-bit test-and-branch after each
>>relevant function call.

>Maybe not. You forget the assignment-of-error-code-variable :-)

Assuming there are no run-time errors, that's a one-time 16-bit
assignment. Just like...

>More seriously, the overhead for exceptions is only once per function

Aha!

>(or exception block)

Hmmmm. The example you posted above had a *lot* of exception blocks,
didn't it? Granted, it was a fairly rare circumstance, and it was i/o
bound so who cares, but the point is that error codes are no worse,
given my example, in which we saw the code structure paralleled
yours. Probably less, in this case, because I initialize an error
context twice and you initialize an error context *many* times.

>Anyway, overhead isn't my big problem. Code readability and
>reliability is.

Me too. Which is why I don't like exceptions. :-)

Jon Wätte

Nov 12, 1993, 5:08:48 AM

In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>By the way, YES, SymC++ 6.0 really is "that bad". Just to clarify for
>anyone who has tuned in late...

And Think C 6.0 isn't, and the header said "Think C" :-)

>>C++ already has GOTO, and, used correctly, GOTO is a godsend.
>>Similarly, break and continue, are also gotos, with goto points
>>no more well defined than for exceptions.

>I can see that this discussion is already futile, but I will plod on
>anyway. Exceptions are *worse* than the GOTO that's already there,
>because they're non-local.

You still didn't comment upon "break" and "continue" which also
are gotos (as are, indeed, switch() statements and if() statements)

>>Indeed, an argument could be made that exceptions are what GOTO
>>_should_ have been, i e they add a layer of discipline and control
>>to the dangerous setjmp() semantics...

>Adding type-safety to a non-local GOTO is like scrambling to add air
>bags to an airplane you're landing without a steering wheel.

Says you. (Oh, my, wait! I'm not going to start name-calling :-)

>>>I do recovery immediately and reporting at the top of the call tree
>>>(on the Mac, in the main event loop).

>>This does not work for many cases. For instance, you're scanning a
>>list of N objects, and you can't get a specific kind of information
>>about an object. Is only this object lacking or all objects? Does the

>True, and sometimes I do make an "exception" (pun intended) for cases
>like this. However, how do exceptions help you implement this? They

For me, they do. It's a natural way of expressing things, and reads
very well.

>>may fail "Overwrite existing XXX in the container you move to?" or
>>be cancelled and trigger more operations...

>And if the possibility that the user can cancel isn't acknowledged in
>an explicit way, then that's a problem with the app, not with error
>checking.

Huh? The possibility the user can cancel is expressed very
eloquently using an exception, while it's ugly to do using
error codes.

Or do you explicitly define your own positive error codes in
the 20000+ range where Apple don't have as many result codes
defined? (though they're still reserved, of course)
Or do you always return a struct; one int for the error and
one for your internal code?

>For example, to my oApp object you need to ask the question "Is it OK
>to quit?" oApp passes the question on to the window list. The window

Yes; this is the way the TCL does it as well. However, you can't always
pre-flight all messages, since that would essentially mean you would
have to perform the operation (which may take some cycles) and then
back out and say "yes I could do it" and then re-do it again - except
now something might have changed, so you will not have won anything
really.

>If you're hiding this sort of information exchange in your error
>handling, you're doing yourself a significant disservice. *And* I
>can't imagine how exception handling aids you in doing this
>disservice to yourself, except that it might be more difficult to
>figure out how to fix it later.

Well, preferences obviously differ. Hmm, wait, you're one of those
Think Pascal weirdos, right? :-)

>>Because it clutters your code and algorithms, that's why.

>I guess *this* is what I really don't understand. This sounds
>incorrigibly ivory-tower to me. I think of run-time errors as part of
>the algorithm. They aren't cluttering anything, they're making the

Yeah, that may work for you. I get the feeling we're doing
different kinds of things, or belong to different species or
something.

>anything to do with exceptions. The bottom line question is: are you
>willing to trade obfuscated flow control for a lack of clutter?

I say exceptions make the flow control CLEARER! It provides for
a conspicuous place where all errors will go (well marked in the
indentation) while still allowing you to grasp the structure of
the rest of the code.

The other way of doing it is:

err = XXXX

if ( ! err ) {
    err = YYYY
}
if ( ! err ) {
    err = ZZZZ
}
if ( err ) {
    RECOVER
}
if ( ! err ) {
    err = FFFF
}

Notice how the RECOVER stands out and is easily found? No? That's
because it doesn't and isn't.

<exception example, 17 clear lines>

>The syntax is unfamiliar to me, but I get the feeling that even if it
>weren't, I wouldn't see your point. I don't see any recovery here.

Huh? Error-handling and recovery are somewhat intermixed terms;
the "recovery" consists of deleting the file if you created it
and couldn't fill it, and closing all opened files.

>Nor do I see the need for recovery. My code would have looked almost
>the same -- but with error code checking in place of the

Yes; it's a trivial example.

> oExpn status, status2;

One down already - scrolling back up to read or write variable
declarations isn't needed with exceptions.

<19 line example deleted>

And you don't use {} around all if-s? I wouldn't hire you,
either :-) (more seriously, ALWAYS use a block after conditional
structures!)

>>Ideally, there would be an easier syntax than this so you don't
>>get the LISP-like right slant...

>Bias against right-slanted code is another thing I put in the box
>with "goto, if used carefully..." Makes me despair that anyone even
>says these things. Right-slant is beautiful. It's a form of
>documentation. (And no, I've never hacked Lisp.) Wanting to make

I disagree. I like indented code that makes control structures
easy to read, but if you have to group 15 system calls, you wander
off to the point where your code starts 60 positions in on the page.

>> open_original ( ) ;
>> prime { close_original ( ) ; }
>> create_destination ( ) ;
>> prime { delete_destination ( ) ; }

>...Ugh! You can't really want this. You're pulling my leg.
>The compiler is doing the right-slant for you behind your back
>and you like it that way? No. I won't believe you.

No. I want this. It's easy to read and to understand. To me.
Your mileage may vary.

>has to do with types. What I do currently for error types has nothing
>to do with 'OSErr's except at the lowest levels, where they are
>translated into something else that works better in a cross-platform
>context and has its own type. In fact, it's a class. In any case,

Oh, so in fact you're re-inventing the exception mechanism,
you're just coding it in there yourself instead of letting
the compiler do it for you? No, I would not want to do this
myself. I can't believe you do. (hmm, sounds familiar)

>As for returning DS errors ("Sorry, the file could not be copied
>because a bus error occurred")... I wouldn't expect a C++ exception

I would, in a protected-mode environment.

>>>class where inline functions and small oft-called functions are a
>>>significant win, odds are you're looking at a numeric class or
>>>something similar which can't generate errors (maybe they blow up if

>>So you would fly an airplane that has 3:1 odds of not crashing?

>I'm not sure where you're going with this line. I don't think anybody

"ODDS ARE you're looking at a numeric class... can't generate
errors"

Well, to spell it out clearly: OFTEN it's NOT a numeric class and
it CAN generate errors and it's NOT memory- nor file bound. To me.
In my situation.

>>Anyway, overhead isn't my big problem. Code readability and
>>reliability is.

>Me too. Which is why I don't like exceptions. :-)

Well, after all, you're one of those strange emigrated barbarians... :-)

Oh, and that thing about checking whether the file is there before
creating it; it doesn't work in several cases:

1) If you want to create a resource fork, you can't simply check
if the FILE is there

2) In drop folders, you might not be able to get file info on
something, but you might still be able to create something that is
not there.

Cheers,

/ h+


--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

There's no sex act that can't be made better with Jell-O.

John Werner

Nov 12, 1993, 2:34:16 PM

In article <gurgleCG...@netcom.com>, gur...@netcom.com (Pete Gontier)
wrote:

> Not only this, but in the example you post later, we see that
> exceptions don't help at all to eliminate clutter -- the way I read
> the example, exceptions *add* clutter.

That was an intentionally convoluted example, however. In normal
situations, I think exceptions do make code cleaner, if they're used
consistently.

Say you have a routine that calls three other functions that can all return
error codes. The code would look something like this in C++.

OSErr bigfunc()
{
    OSErr err = noErr;

    err = func1();

    if (err == noErr) {
        err = func2();
    }
    if (err == noErr) {
        err = func3();
    }

    if (err != noErr) {
        /* RECOVER */
    } else {
        /* SUCCESS */
    }
    return err;
}

Assuming that all 3 of the called functions use exceptions instead of error
codes, the version with exceptions would look like this:

void bigfunc()
{
    try {
        func1();
        func2();
        func3();
        /* SUCCESS */
    }
    catch (...) {
        /* RECOVER */
    }
}

If there's no RECOVER part, you can leave out the try/catch altogether, and
just do this:

void bigfunc()
{
    func1();
    func2();
    func3();
    /* SUCCESS */
}

Looks cleaner to me. :-)

When they're used right, I think exceptions simplify the local control flow
within a given function. The price is that they complicate the global
control flow between functions, but they do it in a predictable way. I
think it's usually a good trade.

--
John Werner wer...@soe.berkeley.edu
UC Berkeley School of Education 510-596-5868 work, 655-6188 home

Pete Gontier

Nov 12, 1993, 1:24:56 PM

d88...@dront.nada.kth.se (Jon Wätte) writes:

>You still didn't comment upon "break" and "continue" which also
>are gotos (as are, indeed, switch() statements and if() statements)

"Still" didn't? Nobody asked me! :-) I have heard this argument
before, but I don't buy it. A GOTO can go anywhere. The rest of the
statements you mentioned are very nicely limited by syntax. If I see
a break, I know the control flow drops to the bottom of the loop. If
I see a continue, I know the control flow jumps to the top of the
loop. Etc. etc. However, if I see a GOTO, all I can do is scan the
entire function for a label, and then I often can't make heads or
tails of why it happened, because there are no syntactic clues.
(Except maybe the comments, which can never be trusted anyway.)
Exceptions are both better and worse in this regard. Mostly worse.
:-)

>...The possibility the user can cancel is expressed very
>eloquently using an exception, while it's ugly to do using
>error codes.

Well, we did deal with this issue a little better in the next few
paragraphs, but in case any reader is confused, I was *not*
suggesting that anyone deal with user cancellation using an error
code. That's just as bad as dealing with user cancellation using
exceptions (except that exceptions are more confusing in general, heh
:-)

>Or do you always return a struct; one int for the error and
>one for your internal code?

Actually, in point of fact, that's what I do. A 16-bit struct. I'm
still evaluating whether this is a good thing; I converted from using
an unsigned short with bit-masking, which I find pretty yucky these
days. oExpn seems to work pretty well, although I can't figure
out which operator to overload to let me do:

oExpn status = SomethingThatMightFail ( );
if (status) /* recover */ ;

For now I do something really ugly:

if (!!status) /* recover */ ;

Fortunately '!!' doesn't appear in any other context in my code, so
when I figure out what to do about it, I can go fix it. Any
suggestions? I guess I could write an inline oExpn::Test. :-(
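
(One possibility I keep coming back to is a conversion operator, the way
the iostreams classes signal failure. A rough, untested sketch, with a
stand-in oExpn rather than my real one:)

class oExpn {
public:
    oExpn() : fCode(0) {}
    oExpn(short code) : fCode(code) {}

    // Convert to a pointer type: non-null (true) exactly when an error
    // is being carried, and no silent promotion to int or float.
    operator const void* () const { return fCode ? this : 0; }
    int operator ! () const { return fCode == 0; }

private:
    short fCode;
};

oExpn SomethingThatMightFail() { return oExpn(-43); }   // -43 = fnfErr

void Example()
{
    oExpn status = SomethingThatMightFail();
    if (status)  { /* recover */ }       // no more '!!'
    if (!status) { /* success path */ }
}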

>>For example, to my oApp object you need to ask the question "Is it OK
>>to quit?" oApp passes the question on to the window list. The window

>Yes; this is the way the TCL does it as well. However, you can't always
>pre-flight all messages, since that would essentially mean you would
>have to perform the operation (which may take some cycles) and then
>back out and say "yes I could do it" and then re-do it again - except
>now something might have changed, so you will not have won anything
>really.

Of course, but there are *so few* operations which need pre-flighting
this way. This business of asking the user whether to save the
closing document is the only one I can think of off the top of my
head. It's been a while since I wrote app-level code, so maybe I'm
just forgetting. I'd love to hear more examples.

>>anything to do with exceptions. The bottom line question is: are you
>>willing to trade obfuscated flow control for a lack of clutter?

>The other way of doing it is:

> err = XXXX
> if ( ! err ) {
>     err = YYYY
> }
> if ( ! err ) {
>     err = ZZZZ
> }
> if ( err ) {
>     RECOVER
> }
> if ( ! err ) {
>     err = FFFF
> }

>Notice how the RECOVER stands out and is easily found? No? That's
>because it doesn't and isn't.

I agree that this code sample sucks. But I don't write code that way,
as you saw from my previous post.

><exception example, 17 "clear" lines>

Quotes being mine, of course. Allow me to take this opportunity to
point out that the problem with exceptions is not the code you see,
it's the code you *don't* see. Sure, you can write superficially
"clear" code using exceptions. The problem is where do the exceptions
get thrown? Where do the thrown exceptions get caught? None of that
is visible. With error codes there is a nice clear visible linkage
between all the parts of the code which are concerned with errors.
Thrown exceptions have a tendency to leap over gulfs of code in
non-obvious ways. I think of the program at the source level as a sort
of intelligent entity telling the story of how it runs; exceptions
are like an attack on its mind's language center that causes it to
stutter and skip parts of the story, to the confusion of its
listeners (that's you and me, the programmers). That's what I'm
talking about when I speak of obfuscated control flow.

>>The syntax is unfamiliar to me, but I get the feeling that even if it
>>weren't, I wouldn't see your point. I don't see any recovery here.

>Huh? Error-handling and recovery are somewhat intermixed terms;
>the "recovery" consists of deleting the file if you created it
>and couldn't fill it, and closing all opened files.

Well, perhaps you're talking about the difference between recovery
and error-recovery. To me, you always close the files, whether you
got an error or not. That's recovery, I guess. Deleting the file
because you couldn't copy it would then be error-recovery. The
problem with your example, then, was that it was mixing the two
things. The file closures were inside exception syntax. Maybe you
should produce another version.

>>Nor do I see the need for recovery. My code would have looked almost
>>the same -- but with error code checking in place of the

>Yes; it's a trivial example.

I don't think it was. It was a trivial algorithm, but it was perfect
for illustrating error-handling. It's the way I write code all the
time. I break up operations solely so that they can return a
discrete error status. That code looked like all of my code. I find
that I do write more functions than most folks, but I don't find that
discouraging, except in the sense that I don't respect most folks'
work, and that *is* discouraging, in a way.

>> oExpn status, status2;

>One down already - scrolling back up to read or write variable
>declarations isn't needed with exceptions.

But now you're bringing in convenience issues, and I scoff at those.
At least these little oExpn guys are in a predictable place, not off
in some other function somewhere you can't see.

><19 line example deleted>

Well, I did tell you my indentation doesn't look like that. Far fewer
curly-brace-only lines in my code, for the most part. About the same
as in your original example.

>>>Ideally, there would be an easier syntax than this so you don't
>>>get the LISP-like right slant...

>>Bias against right-slanted code is another thing I put in the box
>>with "goto, if used carefully..." Makes me despair that anyone even
>>says these things. Right-slant is beautiful. It's a form of
>>documentation. (And no, I've never hacked Lisp.) Wanting to make

>I disagree. I like indented code that makes control structures
>easy to read, but if you have to group 15 system calls, you wander
>off to the point where your code starts 60 positions in on the page.

And that's when I start to think maybe I am writing a
badly-thought-out function and maybe I ought to think about ways to
break the thing up. It always turns out that there is a better way.

>>has to do with types. What I do currently for error types has nothing
>>to do with 'OSErr's except at the lowest levels, where they are
>>translated into something else that works better in a cross-platform
>>context and has its own type. In fact, it's a class. In any case,

>Oh, so in fact you're re-inventing the exception mechanism,
>you're just coding it in there yourself instead of letting
>the compiler do it for you? No, I would not want to do this
>myself. I can't believe you do. (hmm, sounds familiar)

Well, I have to anyway, even if I am using exceptions. A thrown
exception has to pass *something* back to tell why it threw itself.
An OSErr just isn't going to cut it in a cross-platform milieu,
whether you are using exceptions or error codes.

>Well, to spell it out clearly: OFTEN it's NOT a numeric class and
>it CAN generate errors and it's NOT memory- nor file bound. To me.
>In my situation.

And are those classes getting significant wins out of inlines and
small functions? Would they really be crippled if each of those did
an inline 16-bit comparison against 0? Any more crippled than with
exception overhead?

This is like arguing against virtual member function calls because
they involve a table lookup rather than a simple JSR. In other words,
both you and I know it really doesn't make that much of a difference
in most cases and you can fix it for the cases in which it does.

================ side issues start here =================

>Hmm, wait, you're one of those Think Pascal weirdos, right? :-)

Actually, those THINK Pascal weirdos have exceptions, too. Best
exceptions I've seen implemented on the Mac. But I'm not one of those
Pascal weirdos. There is somebody on the net crusading against C++ in
favor of Object Pascal (or some descendant thereof), but I hesitate
to name names for fear of getting it wrong.

>(more seriously, ALWAYS use a block after conditional structures!)

Oh, people are always saying that because they are forced to work
with stupid programmers who only claim they know C-like syntax. I
refuse to compromise indentation just because most of the rest of the
world is incompetent. I *never* have a problem reading conditionals
without { }. Sometimes I'll even put the predicate on the same line!
Horrors!

>>As for returning DS errors ("Sorry, the file could not be copied
>>because a bus error occurred")... I wouldn't expect a C++ exception

>I would, in a protected-mode environment.

Good point. Very good point. Shows you what an unprotected MacOS all
these years has done to stunt my thinking.

>Oh, and that thing about checking whether the file is there before
>creating it; it doesn't work in several cases:

>1) If you want to create a resource fork, you can't simply check
>if the FILE is there

Well, yes you can. You get info on the file you're about to create
and see if its resource fork logical length is > 0. You put that
sequence into a function called ResourceFileExists. And you write a
companion called DataFileExists. ResourceFileExists even gives you a
chance to pre-flight opening the fork to see if the file really is a
resource file or if instead it is damaged or some goofus has used the
fork for something other than resources. Norstad has plenty of horror
stories to tell about that.
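
The guts of ResourceFileExists come out to something like this (a sketch
from memory -- check the parameter-block details against Inside
Macintosh before trusting them):

#include <Files.h>

static Boolean ResourceFileExists(short vRefNum, long dirID, StringPtr name)
{
    CInfoPBRec pb;

    pb.hFileInfo.ioNamePtr   = name;               // Pascal-string file name
    pb.hFileInfo.ioVRefNum   = vRefNum;
    pb.hFileInfo.ioDirID     = dirID;
    pb.hFileInfo.ioFDirIndex = 0;                  // look up by name

    if (PBGetCatInfo(&pb, false) != noErr)
        return false;                              // no such file at all

    return pb.hFileInfo.ioFlRLgLen > 0;            // resource fork non-empty
}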

But, more to the point, you're still creating resource forks? :-)
There are some things for which I adopt a least-common-denominator
approach, like writing the Resource Manager to run under Windows.
Imagine juggling all that far/near crap in the resource map.
<shudder>

>2) In drop folders, you might not be able to get file info on
>something, but you might still be able to create something that is
>not there.

Ah, true. Very interesting thought. Very special-case, though,
because Standard File won't browse the location. Your program would
have to pretty much go find such a folder itself. Not common, but not
extremely rare, either. Warrants further thought.

>There's no sex act that can't be made better with Jell-O.

I thought of a few but I decided most people wouldn't think of them
as sex acts. Oh well.

Pete Gontier

Nov 12, 1993, 6:47:29 PM

wer...@soe.berkeley.edu (John Werner) writes:

>Say you have a routine that calls three other functions that can all return
>error codes. The code would look something like this in C++.

>OSErr bigfunc()
>{
> OSErr err = noErr;

> err = func1();

> if (err == noErr) {
> err = func2();
> }
> if (err == noErr) {
> err = func3();
> }

> if (err != noErr) {
> /* RECOVER */
> } else {
> /* SUCCESS */
> }
> return err;
>}

I agree this example is horrible and difficult to read. But I don't
know why anybody would write that when they could write:

OSErr BigFunc (void)
{
    OSErr oe = noErr;

    if (!(oe = func1 ( )))
    {
        if (!(oe = func2 ( )))
        {
            if (!(oe = func3 ( )))
            {
                // success;
                // 'return (noErr)' here if you want
                // to skip 'if (oe)' below and just
                // do recovery unconditionally; I
                // choose not to.
            }
            if (oe) ; // recoverFromFunc2
        }
        if (oe) ; // recoverFromFunc1
    }

    return (oe);
}

This code documents its flow-control with its indentation. You don't
need comments in big capital letters "SUCCESS" because the shape of
the code leads your eye straight to the success and failure.

In the previously posted example, this same flow control was there,
but it was hidden by the fact that it assumed errors would not happen
and was coded as if they wouldn't. It ended up with this same deeply
nested error contingency path which denied its own depth because it
would rather pretend it was an ivory-tower straight-line execution
right down the page. I don't even start thinking code works that
way. Runtime errors are the first thing I think about.

Jon Wätte

Nov 12, 1993, 9:43:04 PM

In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>> if (err == noErr) {
>> err = func3();
>> }

>I agree this example is horrible and difficult to read. But I don't
>know why anybody would write that when they could write:

> if (!(oe = func1 ( )))
> {
>     if (!(oe = func2 ( )))
>     {

This, if anything, is horrible and ugly to read. I'm no big fan
of assignments in if()s, and even less of function calls in if()s.

>This code documents its flow-control with its indentation. You don't
>need comments in big capital letters "SUCCESS" because the shape of
>the code leads your eye straight to the success and failure.

Uh, I think those SUCCESS and RECOVER comments were placeholders
for the actual success and recovery code...

--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

Cookie Jar: Vanilla Yoghurt with Crushed Oreos.

Jon Wätte

Nov 12, 1993, 10:03:22 PM

In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

> oExpn status = SomethingThatMightFail ( );
> if (status) /* recover */ ;

And, in fact, you're re-implementing the exception mechanism
all by yourself.

I'm sure you have your own strcpy() and memset as well!

>>Yes; this is the way the TCL does it as well. However, you can't always
>>pre-flight all messages, since that would essentially mean you would
>>have to perform the operation (which may take some cycles) and then

>Of course, but there are *so few* operations which need pre-flighting
>this way. This business of asking the user whether to save the

In your world, my friend. My world (definitely no ivory tower
anymore :-) consists of concatenated utility functions making
operation methods, and concatenated methods making up user
actions.

An example from the spec I got (I wrote the original, and the
client added stuff that "didn't look too hard" - yeah, right)

"When having an article open and hitting Send, the same Send
dialog box as in the article-list Send should be shown, after
which the article is saved and closed, after which the article
is sent to the selected container."

The implicated operations are of course testing for a valid
send destination, for overwrite at send, for updating the
display of the receiving and sending container, for locking and
unlocking the collaborative article at the right times, and for
the user to cancel when sending or saving or considering over-
writing. I'm sure I forget something.

I use exceptions for global communications, and it works for me.
You use error classes for global communications, and it works
for you. Never the twain shall meet.

>closing document is the only one I can think of off the top of my
>head. It's been a while since I wrote app-level code, so maybe I'm
>just forgetting. I'd love to hear more examples.

I could give you 20, except then I would violate the non-announcement
clause of my contract ;-)

>>Notice how the RECOVER stands out and is easily found? No? That's
>>because it doesn't and isn't.

>I agree that this code sample sucks. But I don't write code that way,
>as you saw from my previous post.

Yes, and that code was so unbearably ugly and hard to read I
almost threw up :-) You hid the work functions in if(((===)))
clauses so they were hard to find and read.

>of intelligent entity telling the story of how it runs; exceptions
>are like an attack on its mind's language center that causes it to
>stutter and skip parts of the story, to the confusion of its
>listeners (that's you and me, the programmers). That's what I'm
>talking about when I speak of obfuscated control flow.

You're not gonna make it in the world of hypertext media,
that's for sure ;-)

>>I disagree. I like indented code that makes control structures
>>easy to read, but if you have to group 15 system calls, you wander
>>off to the point where your code starts 60 positions in on the page.

>And that's when I start to think maybe I am writing a
>badly-thought-out function and maybe I ought to think about ways to
>break the thing up. It always turns out that there is a better way.

Ever built a mildly complicated AppleEvent?
Probably not.

>>(more seriously, ALWAYS use a block after conditional structures!)

>Oh, people are always saying that because they are forced to work
>with stupid programmers who only claim they know C-like syntax. I

I know C. However, even I forget. A bug prevented is a bug dead.
Your hacker-whacker attitude is shining through!

>>>As for returning DS errors ("Sorry, the file could not be copied
>>>because a bus error occurred")... I wouldn't expect a C++ exception

>>I would, in a protected-mode environment.

>Good point. Very good point. Shows you what an unprotected MacOS all
>these years has done to stunt my thinking.

Well, I only started demanding it in 1989 or so. Then when I bought
Jasik's, I had already gotten an 040 machine. Oh, well.

>But, more to the point, you're still creating resource forks? :-)

Yeah, they tend to be useful for such niceties as data files
that identify their creating application, preference files that
do not lend themselves to opening, and other miscellaneous
user interface niceties.

>>2) In drop folders, you might not be able to get file info on
>>something, but you might still be able to create something that is
>>not there.

>Ah, true. Very interesting thought. Very special-case, though,
>because Standard File won't browse the location. Your program would

I live by special cases. As soon as I want ANYTHING done I have
to work with special cases. It might be the nature of my work area?
Or just that I, as I go thinking about these kinds of things, tend
to naturally group them into exceptions? :-)

(Actually, I use a mixture)

--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

Cookie Jar: Vanilla Yoghurt with Crushed Oreos.

Pete Gontier

Nov 13, 1993, 11:44:11 AM

d88...@dront.nada.kth.se (Jon Wätte) writes:

>In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>> if (err == noErr) {
>>> err = func3();
>>> }

>>I agree this example is horrible and difficult to read. But I don't
>>know why anybody would write that when they could write:

>> if (!(oe = func1 ( )))
>> {
>>     if (!(oe = func2 ( )))
>>     {

>This, if anything, is horrible and ugly to read. I'm no big fan
>of assignments in if()s, and even less of function calls in if()s.

OK, but now you're not talking about exceptions/error codes any more.
I could just as easily write:

oe = func1 ( );
if (!oe)
{
    oe = func2 ( );
    if (!oe)
    {

...and I would still think it better than the original example. I
happen to think this last example is a waste of space, but that's
only a preference. (In some cases SymC++ will warn about assignments
inside 'if' statements, and I generally work around it with code like
this last example.)

With regard to this last example, you could also make the case that
the local optimizer will have less of an opportunity to make the best
use of scratch registers, but only in a small way.
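
Spelled out with all three calls (a throwaway sketch -- func1/func2/func3
are stubs here and oExpn is just a placeholder error type), the shape I'm
describing is:

typedef long oExpn;			/* placeholder error type; 0 means success */

static oExpn func1 ( void ) { return 0; }	/* stand-ins for the real calls */
static oExpn func2 ( void ) { return 0; }
static oExpn func3 ( void ) { return 0; }

oExpn bigfunc ( void )
{
	oExpn oe;

	oe = func1 ( );
	if (!oe)
	{
		oe = func2 ( );
		if (!oe)
		{
			oe = func3 ( );
			if (!oe)
			{
				/* success path goes here */
			}
		}
	}

	/* recovery/reporting for any failure goes here */
	return oe;
}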

>>This code documents its flow-control with its indentation. You don't
>>need comments in big capital letters "SUCCESS" because the shape of
>>the code leads your eye straight to the success and failure.

>Uh, I think those SUCCESS and RECOVER comments were placeholders
>for the actual success and recover code...

Well, of course they were, but that wasn't the point. Those
place-holders were in caps to point out the fact that they were
difficult to find, especially if they had not been in caps (as code
would not be). My point was that with a nested code structure, you
don't need to draw attention anywhere, because the reader's attention
flows along with the code.

Pete Gontier

unread,
Nov 13, 1993, 12:03:58 PM11/13/93
to
d88...@dront.nada.kth.se (Jon Wätte) writes:

>In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>> oExpn status = SomethingThatMightFail ( );
>> if (status) /* recover */ ;

>And, in fact, you're re-implementing the exception mechanism
>all by yourself.

I don't get it. I would still have to provide 'oExpn' even if I were
using C++ exceptions. I'm not using native error codes. Are you
saying that if I threw a C++ exception with an 'oExpn' object I would
have re-invented exceptions?

>I'm sure you have your own strcpy() and memset as well!

>>Of course, but there are *so few* operations which need pre-flighting
>>this way. This business of asking the user whether to save the

> [groupware-sounding example deleted]

I didn't see how this example is done better through exceptions.

>I use exceptions for global communications, and it works for me.
>You use error classes for global communications, and it works
>for you. Never the twain shall meet.

Whoa! This is relativism. If we're going to have a public argument, I
don't expect you to back out with "I'm OK, you're OK." :-)

>>closing document is the only one I can think of off the top of my
>>head. It's been a while since I wrote app-level code, so maybe I'm
>>just forgetting. I'd love to hear more examples.

>I could give you 20, except then I would violate the non-announcement
>clause of my contract ;-)

Well, make one up. Or generalize one of the ones you're dealing
with right now so it's not recognizable.

>>of intelligent entity telling the story of how it runs; exceptions
>>are like an attack on its mind's language center that causes it to
>>stutter and skip parts of the story, to the confusion of its
>>listeners (that's you and me, the programmers). That's what I'm
>>talking about when I speak of obfuscated control flow.

>You're not gonna make it in the world of hypertext media,
>that's for sure ;-)

Heh. :-) But should a programming environment be hyper-textual? Do
you like the way code can be just anywhere in a HyperCard stack? I
didn't think so -- nobody does. It has power, but it also has severe
drawbacks, and most people curse it more than they praise it, if
they've actually used it.

>>>I disagree. I like indented code that makes control structures
>>>easy to read, but if you have to group 15 system calls, you wander
>>>off to the point where your code starts 60 positions in on the page.

>>And that's when I start to think maybe I am writing a
>>badly-thought-out function and maybe I ought to think about ways to
>>break the thing up. It always turns out that there is a better way.

>Ever built a mildly complicated AppleEvent? Probably not.

Well, that's one of the things that's wrong with AppleEvents, IMO.
But yes, I have built a fairly complicated AppleEvent, and to build
one such event I generally write lots of subroutines.

>>>(more seriously, ALWAYS use a block after conditional structures!)

>>Oh, people are always saying that because they are forced to work
>>with stupid programmers who only claim they know C-like syntax. I

>Your hacker-whacker attitude is shining through!

I don't think so. Maybe my language-lawyer attitude is shining
through. ("Whacker"? You been hanging out at the U. of Maryland? :-)
I believe people should know their tools backwards and forwards.
Until there's viable GUI programming, that is. :-)

>Well, I only started demanding [a protected MacOS] in 1989 or so.
>Then when I bought Jasik's, I had already gotten an 040 machine. Oh,
>well.

Hey man, that's what your wife's Duo is *really* for. She thinks you
got it for her to write with when in fact you really got it to run
Jasik. :-)

>>>2) In drop folders, you might not be able to get file info on
>>>something, but you might still be able to create something that is
>>>not there.

>>Ah, true. Very interesting thought. Very special-case, though,
>>because Standard File won't browse the location. Your program would

>I live by special cases. As soon as I want ANYTHING done I have
>to work with special cases. It might be the nature of my work area?
>Or just that I, as I go thinking about these kinds of things, tend
>to naturally group them into exceptions? :-)

Well, do remember, you're involved with groupware (I gather from your
examples), so AppleShare permissions are central to you. Most other
folks don't even know they exist. I imagine there's an area of the OS
like that for every project, if not more than one. My point was,
though, that I would rather do an explicit check for a piece of
information than rely on interpreting an error code. Here's a better
example: lots of folks will get a list of files in a given directory
with indexed calls to PBGetCatInfo, and when it returns fnfErr, they
stop. What I do instead is to *find out* how many files there are in
that directory (with another call to PBGetCatInfo before the loop)
and then make exactly that many indexed calls to get the directory
entries. This technique is hardly rocket science, but it *is*
illustrative.
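
In code, roughly (a sketch from memory -- synchronous calls only; check
the CInfoPBRec field names against your Files.h, and note that older
interface files spell the call PBGetCatInfo(&pb, false) rather than
PBGetCatInfoSync):

#include <Files.h>

/*
 * Walk a directory by first asking how many items it holds (the valence,
 * ioDrNmFls), then making exactly that many indexed PBGetCatInfo calls.
 */
static OSErr WalkDirectory ( short vRefNum, long dirID )
{
	CInfoPBRec	pb;
	Str255		name;
	short		count, index;
	OSErr		err;

	/* Preflight: ask about the directory itself. */
	name[0] = 0;
	pb.dirInfo.ioNamePtr   = name;
	pb.dirInfo.ioVRefNum   = vRefNum;
	pb.dirInfo.ioDrDirID   = dirID;
	pb.dirInfo.ioFDirIndex = -1;		/* negative index = info about the dir */
	err = PBGetCatInfoSync ( &pb );
	if ( err )
		return err;
	count = pb.dirInfo.ioDrNmFls;		/* number of items in the directory */

	/* Now make exactly 'count' indexed calls. */
	for ( index = 1; index <= count; index++ )
	{
		pb.hFileInfo.ioNamePtr   = name;
		pb.hFileInfo.ioVRefNum   = vRefNum;
		pb.hFileInfo.ioDirID     = dirID;	/* reset each time; the call overwrites it */
		pb.hFileInfo.ioFDirIndex = index;
		err = PBGetCatInfoSync ( &pb );
		if ( err )
			return err;		/* stopping condition, as above */
		/* ... look at the entry here ... */
	}
	return noErr;
}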

Dana S Emery

unread,
Nov 13, 1993, 3:47:04 PM11/13/93
to
In article <gurgleCG...@netcom.com>, gur...@netcom.com (Pete Gontier)
wrote:
>
> My point was that with a nested code structure, you
> don't need to draw attention anywhere, because the reader's attention
> flows along with the code.

Until the eyes glaze over and a typo occurs: [1, 2, 3, many] is a
very real cognitive limit for most of us, and mixing indentation
for recovery with indentation for other reasons (looping, blocks)
is guaranteed to put code into the [many] category. Further, in
order to avoid going right too quickly one has to go down instead,
and I like to have single-screen procedures as much as possible,
so I don't like gratuitously vertical coding styles.

Maybe you have a 21" monitor for when you need to do line/line
right-set comments; we don't all have that luxury.

I vote for the exceptions; I have unfond memories of maintaining
nested error recovery such as you espouse which ran on for several
pages (don't blame me for the original code, I rewrote it as soon
as I could figure out what the heck was going on).

IMHO, any coding technique which enables individual procedures to
fit onto a single screen has considerable virtue.

My experience (1968..present) includes a variety of languages and
programming styles: Forgo, Fortran II & IV, Basic, TECO, Assembler,
Structured Assembler (macros), Algol, Pascal, Mary, C, C+-. I
looked into ((((lisp)))) and ran away in fear.

IMHO an exception mechanism is a good thing.
--

Dana S Emery <de...@umail.umd.edu>

Peter N Lewis

unread,
Nov 14, 1993, 12:48:23 AM11/14/93
to
wer...@soe.berkeley.edu (John Werner) writes:

>void bigfunc()
>{
> try {
> func1();
> func2();
> func3();
> /* SUCCESS */
> }
> catch {
> /* RECOVER */
> }
>}

That's all very well and good, but how often is the recover section
independent of how far you successfully got? Let's take a simple
example of copying a file: you need to open two files, loop around
copying the file, and then close both (Pete Resnick: shut up about
IOCompletions ok! :-). Now you have to close each file if and only
if the file was successfully opened. Not closing a file is bad; closing
it twice is much worse. Also, when closing the read file, you can junk
the error, but when closing the write file, you can't.

This can easily be done using nested if's. It can probably be easily
done using exceptions if you write several different levels of try/catches,
but I'm suspicious it wouldn't be clear.
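
For the record, the nested-if version of that copy comes out roughly
like this (a sketch only: FSSpec-based calls, a single buffer, and it
assumes the destination file has already been created):

#include <Files.h>

/*
 * Copy the data fork of 'source' onto an already-created file 'dest',
 * closing each file if and only if it was successfully opened.
 */
static OSErr CopyDataFork ( const FSSpec *source, const FSSpec *dest )
{
	short	srcRef, dstRef;
	long	count;
	char	buffer [ 4096 ];
	OSErr	err, closeErr;

	err = FSpOpenDF ( source, fsRdPerm, &srcRef );
	if ( err == noErr )
	{
		err = FSpOpenDF ( dest, fsWrPerm, &dstRef );
		if ( err == noErr )
		{
			do
			{
				count = sizeof buffer;
				err = FSRead ( srcRef, &count, buffer );
				if ( ( err == noErr || err == eofErr ) && count > 0 )
				{
					OSErr writeErr = FSWrite ( dstRef, &count, buffer );
					if ( writeErr != noErr )
						err = writeErr;
				}
			} while ( err == noErr );
			if ( err == eofErr )		/* normal end of file */
				err = noErr;

			closeErr = FSClose ( dstRef );	/* can't junk this one */
			if ( err == noErr )
				err = closeErr;
		}
		(void) FSClose ( srcRef );		/* junk the error on the read file */
	}
	return err;
}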

Personally, I'm still undecided about exceptions, the only experience I
have about them is with a compiler I wrote many years back, that had
exceptions like:

begin
code
except
case exception of
...
end-case;
end-begin;

They were useful in some ways, and I miss having them, but I'm still
not certain which method is clearer. Conceptually, exceptions should be
clearer, but in practice it means the error caused by one chunk of code
is dealt with quite far away, and exceptions tend to be grouped together
(as above) where the exception handling really needs to be separate so
that it knows what failed.

There is only one solution I can see to the above problem - and that is
to hide the whole thing in a CopyFile procedure in a library or in the OS,
and forget about it :-)
Peter.

--
_______________________________________________________________________
Peter N Lewis <peter...@info.curtin.edu.au> Ph: +61 9 368 2055

Chris Verret

unread,
Nov 14, 1993, 1:23:26 PM11/14/93
to
In article <gurgleCG...@netcom.com>, gur...@netcom.com (Pete Gontier)
wrote:

> wer...@soe.berkeley.edu (John Werner) writes:
>
> >OSErr bigfunc()
> >{
> > ...

In this last example both error-handling flow-control and algorithm
flow-control are extremely intertwined. If the algorithm needs a few
if statements of its own, then the code would really not be very clear
(use of 2 kinds of if's, indentation,...).

Error-handling and algorithm should be separated as much as possible.
At this moment, I'm trying out the techniques in the "Exceptions.h"
file, I found at ftp.apple.com. It's based upon the following simplified
example (note that I left out the '\' for ease & elegance):

#define require(assertion, exception)
{
	if (assertion) ;
	else {
		DebugStr("\pAssertion \"" #assertion "\" failed" ";G");
		goto exception;
	}
}


OSErr AnotherBigFunc()
{
	OSErr error;

	require( !(error = func1()) , noFunc1 );
	require( !(error = func2()) , noFunc2 );
	require( !(error = func3()) , noFunc3 );

noFunc3:
	;	// recover from func3
noFunc2:
	;	// recover from func2
noFunc1:
	;	// recover from func1
	return error;
}

I think I like it, but I was not able to make the included dprintf dcmd
work (did anyone else successfully use this "Exceptions.h" in THINK C?).
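
For anyone who wants to paste this in: with the '\' continuations
restored and a do/while(0) wrapper (so the macro acts like a single
statement after an un-braced 'if'), a stand-alone version of the same
skeleton looks like this. func1/func2/func3 are stubs, and the DebugStr
call is reduced to a comment so the sketch compiles outside THINK C:

typedef short OSErr;
enum { noErr = 0 };

#define require(assertion, exception)			\
	do {						\
		if ( !(assertion) ) {			\
			/* DebugStr(...) goes here */	\
			goto exception;			\
		}					\
	} while (0)

static OSErr func1 ( void ) { return noErr; }	/* stand-ins */
static OSErr func2 ( void ) { return noErr; }
static OSErr func3 ( void ) { return noErr; }

OSErr AnotherBigFunc ( void )
{
	OSErr error;

	require ( !(error = func1()), noFunc1 );
	require ( !(error = func2()), noFunc2 );
	require ( !(error = func3()), noFunc3 );

noFunc3:
	;	/* recover from func3 */
noFunc2:
	;	/* recover from func2 */
noFunc1:
	;	/* recover from func1 */
	return error;
}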

--
__________________________________________________________________________
Chris Verret cve...@vnet3.vub.ac.be

Dana S Emery

unread,
Nov 14, 1993, 4:02:11 PM11/14/93
to
In article <2c4gr7$e...@ncrpda.curtin.edu.au>, pe...@ncrpda.curtin.edu.au
(Peter N Lewis) wrote:
>
>
> That's all very well and good, but how often is the recover section
> independent of how far you successfully got?

Surprisingly often, when one is working with well designed objects.

void FileCopyDataFork(TYourFile source, TYourFile target)
{
	long preflightSpace = source->GetSize();

	if (!target->HasRoom(preflightSpace)) {
		Failure(kErrorNoRoomOnTargetDisk);
	}
	try {
		source->OpenToRead();
		target->OpenToWrite(preflightSpace);
		while (source->HasSome()) {
			source->ReadSome(...);
			target->WriteSome(...);
		}
		target->Flush();
	} catch (...) {
		target->KillFileSilently(); // smart, only kills if exists.
	}
	source->Close(); // smart, tracks "openness", closes only if open
	target->Close(); // any failure here will be handled by the caller's catch.
}

> Now you have to close each file if and only
> if the file was successfully opened.

Yes, as far as the OS is concerned, but nothing prevents you
from working at a higher level of abstraction provided you do
it intelligently. In this case it helps to give the TYourFile
object enough smarts so that it knows when to forward a close
request to the system, and when to ignore the request (silently)
when the file wasn't open.

A semaphore instance variable does the trick nicely.
See the TCL CFile class for an example.
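
In sketch form, the idea is just this (TFile and fIsOpen are made-up
names for illustration; Toolbox calls, error handling omitted):

#include <Files.h>

/*
 * A file object that tracks its own open state, so Close() can be called
 * unconditionally: it forwards to FSClose only if the file really is open.
 */
class TFile {
public:
	TFile ( ) : fRefNum ( 0 ), fIsOpen ( false ) { }

	OSErr	Open ( const FSSpec &spec, char perm )
	{
		OSErr err = FSpOpenDF ( &spec, perm, &fRefNum );
		fIsOpen = ( err == noErr );
		return err;
	}

	OSErr	Close ( )
	{
		if ( ! fIsOpen )	/* the "semaphore": ignore if not open */
			return noErr;
		fIsOpen = false;
		return FSClose ( fRefNum );
	}

private:
	short	fRefNum;
	Boolean	fIsOpen;
};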

Jon Wätte

unread,
Nov 14, 1993, 5:58:52 PM11/14/93
to
In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>Ever built a mildly complicated AppleEvent? Probably not.

>Well, that's one of the things that's wrong with AppleEvents, IMO.
>But yes, I have built a fairly complicated AppleEvent, and to build
>one such event I generally write lots of subroutines.

Oh, so THERE it's okay to jump to TOTALLY IRRELEVANT parts of the
code in a wickedly twisted code flow? But for exceptions it isn't?
Subroutine calls are not any more "local" than exception handlers.

>>>>(more seriously, ALWAYS use a block after conditional structures!)

>>>Oh, people are always saying that because they are forced to work
>>>with stupid programmers who only claim they know C-like syntax. I

>>Your hacker-whacker attitude is shining through!

>I don't think so. Maybe my language-lawyer attitude is shining

A language shouldn't need lawyers. This is especially true for
English :-) (C++ isn't the right language...)

A good language is clear and obvious no matter how you write it
within the constraints of the language. LISP is OK, for instance.
SmallTalk isn't bad. C sucks, in this regard.

>I believe people should know their tools backwards and forwards.

I believe people shouldn't be forced to memorize more than they
can easily keep in their spine while using the gray matter for work.
A statement like:

	err = read_file ( whatever ) ;
	if ( err )
		return ;
	...

is very dangerous, because when you insert a close_file in the
error handler, the semantics of the control flow changes:

	if ( err )
		close_file ( whatever ) ;
		return ;

I've seen enough errors like this in commercial and net code
(like UNIX SVR{2,3,4}, NetHack and other places) and also in my
own early code that I now ALWAYS use braces and spaces wherever
legal and natural. Guess what? I don't make this error anymore.
Because this practice provides for easy maintainability and
decreases both the number and the likelihood of bugs, it should be
mandatory.
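
Spelled out as a compilable fragment (read_file, close_file and the Err
type are stubs here, just so it stands alone), the braced version reads:

typedef int Err ;

static Err  read_file  ( int whatever ) { (void) whatever ; return 0 ; }
static void close_file ( int whatever ) { (void) whatever ; }

static void process ( int whatever )
{
	Err err ;

	err = read_file ( whatever ) ;
	if ( err )
	{
		close_file ( whatever ) ;
		return ;
	}
	/* ... the rest of the work ... */
}

Now inserting the close_file line cannot silently change which
statements are conditional.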

>>Well, I only started demanding [a protected MacOS] in 1989 or so.
>>Then when I bought Jasik's, I had already gotten an 040 machine. Oh,
>>well.

>Hey man, that's what your wife's Duo is *really* for. She thinks you
>got it for her to write with when in fact you really got it to run
>Jasik. :-)

Actually, my wife has my 170, which doesn't take more than 8 MB of
RAM. I have an eye towards the 270c, because at MacHack everyone
could hear me saying "gee, that's a nice 180c you've got there.
Gosh. I wish I had one. But that floppy weighs, and I really don't
use it all that much. Hmm, no, I'll wait for an active-color Duo."
I'm surprised at the speed of response with Apple's hardware
development :-)

>>>>2) In drop folders, you might not be able to get file info on
>>>>something, but you might still be able to create something that is

>>>Ah, true. Very interesting thought. Very special-case, though,

>>I live by special cases. As soon as I want ANYTHING done I have

>Well, do remember, you're involved with groupware (I gather from your
>examples), so AppleShare permissions are central to you. Most other

Yes, and exceptions are (one) way of managing them. There are other
areas where the same thing would apply.

>with indexed calls to PBGetCatInfo, and when it returns fnfErr, they
>stop. What I do instead is to *find out* how many files there are in
>that directory (with another call to PBGetCatInfo before the loop)
>and then make exactly that many indexed calls to get the directory
>entries. This technique is hardly rocket science, but it *is*
>illustrative.

It's also seriously problem-ridden, since the number of files may
CHANGE before you're at the end of the list, and calling PBGetCatInfo
on a file that's not there takes FOREVER. The way I do it, I always
take one miss. The way you do it, you might miss files, or get 100
misses. What you REALLY should do, is work the catalog in tandem, so
you re-read your last file right AFTER you read the next file, to
check that it's still there and has the same name. If you find an
inconsistency, you start over from the beginning. This is the way to
go if correctness is important.
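
In sketch form, with a made-up GetIndexedName standing in for the
PBGetCatInfo plumbing (and an in-memory "folder" so the example is
self-contained):

#include <string>
#include <vector>

// A stand-in "directory": in real code this would be indexed PBGetCatInfo
// calls; here it is a vector so the sketch is self-contained.
static std::vector<std::string> gFolder;

// Returns true and fills 'name' if item 'index' (1-based) still exists.
static bool GetIndexedName ( int index, std::string &name )
{
	if ( index < 1 || index > (int) gFolder.size ( ) )
		return false;
	name = gFolder [ index - 1 ];
	return true;
}

// Walk the folder while it may be changing underneath us: after reading
// item n, re-read item n-1 and check it is still there with the same name.
// On any inconsistency, start over from the beginning.
static std::vector<std::string> WalkInTandem ( )
{
	std::vector<std::string> seen;
	std::string current, previous, check;
	int n = 1;

	while ( GetIndexedName ( n, current ) )
	{
		if ( n > 1 && ( ! GetIndexedName ( n - 1, check ) || check != previous ) )
		{
			seen.clear ( );		// the catalog shifted: start over
			n = 1;
			continue;
		}
		seen.push_back ( current );
		previous = current;
		++n;
	}
	return seen;
}

int main ( )
{
	gFolder.push_back ( "A" );
	gFolder.push_back ( "C" );
	WalkInTandem ( );		// would visit both items
	return 0;
}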

Cheers,

/ h+


--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

Not speaking for the Liberian Government.

Ron_Hu...@bmug.org

unread,
Nov 14, 1993, 12:52:48 AM11/14/93
to
To: Pete Gontier,gur...@netcom.com,UUCP

>These are apples and oranges. Definitely C++ gives you "better" data
>types than FORTRAN. But data types != exceptions. Exceptions give
>C++ a big fat GOTO statement. Sounds like FORTRAN to me...

Huh? I didn't say they were.

The only proper use of GOTO is for handling errors, and I'd just
as soon have it formalized.

>> In C++ without exceptions, you are forced to ... do a little bit of
>> work; check for errors; do a tiny little bit more; check again; keep
>> on plodding. Worse yet, when you encounter an error, you have to
>> handle it right away,
>

>I do recovery immediately and reporting at the top of the call tree
>(on the Mac, in the main event loop).

But that's the point! You can't always do recovery immediately, because
at the point in the code where you detect the error, you don't always
know what to do about it.

>> or pass an error code back to your caller, who
>> has to check again for an error that you have already detected. You
>> wind up doing more checking than work.
>

>I am unclear why people are concerned about this. I do wish C++ had
>a function modifier which *forced* callers to pay attention to return
>values (error codes). But I trust myself not to ignore them.

Because I have more useful things to send back as a return value than
an error code.

Let's take an example. Suppose I define a class BigInt that provides
arbitrarily large integer arithmetic. I want it to behave like the
other integer types, except that it allocates memory dynamically rather
than overflowing.

I would like to write:

A = B * (C + D);

where A, B, C, and D are BigInts. There is another thread where the
same expression comes up with Matrices, and the hapless soul who
ventured to suggest

A = C; A += D; A.MultiplyLeft(B);

was severely (too severely, IMHO) criticized. But you would have me
write:

err = (A = C);
if (err) { ... }
else {
    err = (A += D);
    if (err) { ... }
    else {
        err = (A *= B);
        if (err) { ... }
        else ... }}}

Now, the way I would like to write it makes it clear what is going on.
In your form, all the real work is buried inside all the error testing,
and I'm likely to make errors BECAUSE of the error-handling code that
I wouldn't have made if I could see what I was doing.

Besides, I'd like my BigInt operators to actually return the resulting
BigInt, instead of an error code. Something like 'err = (A = C);' is
something I really would not like to see in one of my own programs.
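
Concretely, what I'm after is something like this toy version -- not a
real bignum, just enough to show the arithmetic reading like arithmetic
and the one failure that matters being caught once:

#include <new>
#include <vector>
#include <cstdio>

// A toy stand-in for BigInt: the operators allocate, and the only way an
// allocation failure reaches the caller is as an exception (std::bad_alloc
// here), never as an error code threaded through every statement.
class BigInt {
public:
	BigInt ( long v = 0 ) : digits ( 1, v ) { }
	BigInt operator+ ( const BigInt &o ) const
	{
		BigInt r ( *this );
		r.digits.push_back ( o.digits [ 0 ] );	// allocation may throw
		return r;
	}
	BigInt operator* ( const BigInt &o ) const
	{
		BigInt r ( *this );
		r.digits.push_back ( o.digits [ 0 ] );	// allocation may throw
		return r;
	}
private:
	std::vector<long> digits;
};

int main ( )
{
	try {
		BigInt A, B ( 2 ), C ( 3 ), D ( 4 );
		A = B * ( C + D );		// reads like arithmetic
		/* SUCCESS */
	}
	catch ( std::bad_alloc & ) {
		/* RECOVER: one handler covers every allocation in the expression */
		std::fprintf ( stderr, "BigInt: out of memory\n" );
	}
	return 0;
}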

>If your concern is performance, that doesn't exactly fly, either,
>because most programs are i/o and memory-allocator bound and those
>that aren't go off and do long computations which for the most part
>can't generate errors.

What makes you think long computations can't generate errors? Even
discounting overflow, underflow, divide by zero, entry-not-found-in-
table, table-full, range-error, and the like, there is still
assertion-error, which I have learned to respect as an honest-to-
goodness exception.

>>With exceptions, you can say: Do all of this, and if anything goes wrong
>>anywhere, here's what I want you do. The error handling is all in one
>>place, instead of being scattered around.
>

>Yeah, but in *which* place? It's impossible to conclusively tell
>where a thrown exception goes without a grep. Unless you like to
>memorize these things.

Why do you care? It's impossible to conclusively tell where a
returned error code is handled without a grep. Unless you like to
memorize these things.

>Besides, I don't see where you get the idea exceptions are handled in
>any fewer places than error codes. Are you saying your strategy is to
>catch all exceptions in one place? Shades of ON ERR GOSUB... But I
>don't think this is what you mean, which is exactly my point --
>exception catching isn't done in any fewer places than error code
>"parsing".

But it is. See my example with the BigInts. An error code has to
be tested after each individual operation. A single exception handler
can handle errors from more than one call.

>>More importantly, the error can be detected where it happens, and then
>>handled where you know what to do about it. These are not generally the
>>same place.
>
>Nothing stopping you from doing the same with error codes. Bonus:
>you get to see the recovery logic *right in the source code*. No
>jumps to some other place you can't see at the moment. High tech,
>eh?

Bonus? You HAVE to see the recovery logic *right in the source code*
even if you would prefer to have a little separation so the source
would be more readable. Actually, what you get to see in the source
code is the contorted control flow needed to interleave the normal
path and the exception path. Notice that the BigInt example is
already unreadable even though I have only shown the code that
re-discovers the exception that was already detected inside the
BigInt methods. You still need to add the code that actually
handles the error. (So do I, of course, but I only have to add it
in one place. You have to add it in three places.)

>People are always worried about performance. Bah! If you've got a
>class where inline functions and small oft-called functions are a
>significant win, odds are you're looking at a numeric class or
>something similar which can't generate errors (maybe they blow up if
>you try to divide by zero, but that happens with integers, too, and
>nobody expects a C++ exception from that).

*I* care about performance! What is the point of getting faster and
faster hardware if the programmers fritter away the power by getting
less and less concerned about performance? And *users* care about
performance! They will go out and pay big bucks for accelerator
boards that improve performance by 30%. Heck, that kind of performance
gain is trivial for a programmer to achieve, if he will just stop
being so proud of not caring about efficiency.

And as I'm sure you realize by now, numeric classes can generate
errors (BigInt is very likely to generate a noMemory error), and if
I implement the class in C++, I most certainly do expect it to
generate a C++ exception (as soon as those are implemented).

I believe that efficiency should be the last thing a programmer thinks
about. That is:

a) Think about other things first. Correctness and robustness are
much more important. It doesn't matter how quickly you get the
wrong answer. Or how quickly you can crash.

b) Do think about efficiency eventually. You can put it at the
end of the list of things you care about, but do not take it
off that list.

I want exceptions mainly for robustness and correctness. Workarounds
that are inefficient are not attractive, but my real concern is that
some of the workarounds (like error result codes) are actually more
likely to introduce programming errors than to catch the errors they
are supposed to be handling. And in some cases (exceptions inside
constructors, for example) the workarounds simply are not there.

Until we have them, I will muck along as best I can. I *do* check for
errors (all of them) in my code, because I'm a firm believer in Murphy,
and I know that any error I don't check for is going to kill me.
But I'm getting pretty tired of all the work it takes. I'd like a
more elegant way to do it.

-Ron Hunsinger


Joseph Hall

unread,
Nov 15, 1993, 11:23:19 AM11/15/93
to
In article <1993Nov13.2...@bmug.org> Ron_Hu...@bmug.org writes:
>To: Pete Gontier,gur...@netcom.com,UUCP
>
>>These are apples and oranges. Definitely C++ gives you "better" data
>>types than FORTRAN. But data types != exceptions. Exceptions give
>>C++ a big fat GOTO statement. Sounds like FORTRAN to me...
>
>Huh? I didn't say they were.
>
>The only proper use of GOTO is for handling errors, and I'd just
>as soon have it formalized.

Not!

What about lexers & other state machines, machine-generated code, ...

--
Joseph Nathan Hall | I may be indispensable, but I am probably not
Software Architect | irreplaceable.
Gorca Systems Inc. | jos...@joebloe.maple-shade.nj.us (home)
(on assignment) | (602) 732-2549 (work) Josep...@sat.mot.com

Pete Gontier

unread,
Nov 15, 1993, 11:18:00 PM11/15/93
to
Pete Gontier,gur...@netcom.com, writes

>These are apples and oranges. Definitely C++ gives you "better" data
>types than FORTRAN. But data types != exceptions. Exceptions give
>C++ a big fat GOTO statement. Sounds like FORTRAN to me...

Ron_Hu...@bmug.org writes:

>Huh? I didn't say they were.

That's because there are two threads on this topic currently, and
I was replying to the other thread.

>The only proper use of GOTO is for handling errors, and I'd just
>as soon have it formalized.

"proper use of GOTO"? That's why I said this discussion was useless
in another post. I have yet to find an exception advocate who eschews
GOTO.

>>> In C++ without exceptions, you are forced to ... do a little bit of
>>> work; check for errors; do a tiny little bit more; check again; keep
>>> on plodding. Worse yet, when you encounter an error, you have to
>>> handle it right away,
>>
>>I do recovery immediately and reporting at the top of the call tree
>>(on the Mac, in the main event loop).

>But that's the point! You can't always do recovery immediately, because
>at the point in the code where you detect the error, you don't always
>know what to do about it.

Yes, and in the other thread I agreed with this. My point was that it
doesn't make any difference whether you are using exceptions or error
codes; flexibility is not the issue.

>>> or pass an error code back to your caller, who
>>> has to check again for an error that you have already detected. You
>>> wind up doing more checking than work.
>>
>>I am unclear why people are concerned about this. I do wish C++ had
>>a function modifier which *forced* callers to pay attention to return
>>values (error codes). But I trust myself not to ignore them.

>Because I have more useful things to send back as a return value than
>an error code.

Why does it have to be in the return value? Why not pass a reference
to the variable you want passed back? I have a feeling this has to do
with your example...

>Let's take an example. Suppose I define a class BigInt that provides
>arbitrarily large integer arithmetic. I want it to behave like the
>other integer types, except that it allocates memory dynamically rather
>than overflowing.

This is another problem I have with C++, people using the operator
overloading facilities to make some operators throw exceptions and
others not. This prevents C++ from fulfilling one of its design
goals, which was to look pretty much like C. Whether that's a good
design goal or not, there it is. I happen to think it was a good
design goal. If you don't, that's fine, but I think you'll be heading
off toward another dialect of C++ that Bjarne has never heard of.

>Now, the way I would like to write it makes it clear what is going on.
>In your form, all the real work is buried inside all the error testing,
>and I'm likely to make errors BECAUSE of the error-handling code that
>I wouldn't have made if I could see what I was doing.

Except that I would never have arithmetic cause an error.

>>If your concern is performance, that doesn't exactly fly, either,
>>because most programs are i/o and memory-allocator bound and those
>>that aren't go off and do long computations which for the most part
>>can't generate errors.

>What makes you think long computations can't generate errors? Even
>discounting overflow, underflow, divide by zero, entry-not-found-in-
>table, table-full, range-error, and the like, there is still
>assertion-error, which I have learned to respect as an honest-to-
>goodness exception.

Yes, but does handling these few rare errors really put a crimp in
performance? I have a feeling it doesn't. At least not more than the
overhead associated with a C++ exception mechanism.

>>>With exceptions, you can say: Do all of this, and if anything goes wrong
>>>anywhere, here's what I want you do. The error handling is all in one
>>>place, instead of being scattered around.
>>
>>Yeah, but in *which* place? It's impossible to conclusively tell
>>where a thrown exception goes without a grep. Unless you like to
>>memorize these things.

>Why do you care? It's impossible to conclusively tell where a
>returned error code is handled without a grep. Unless you like to
>memorize these things.

You're right, I should not have made this claim. The claim I should
have made was that with exceptions you are forced to do a complex,
difficult grep and even then you might not find what you are looking
for.

>>Besides, I don't see where you get the idea exceptions are handled in
>>any fewer places than error codes. Are you saying your strategy is to
>>catch all exceptions in one place? Shades of ON ERR GOSUB... But I
>>don't think this is what you mean, which is exactly my point --
>>exception catching isn't done in any fewer places than error code
>>"parsing".

>But it is. See my example with the BigInts. An error code has to
>be tested after each individual operation. A single exception handler
>can handle errors from more than one call.

That's true. I have to give you that. Can't say it outweighs the
other considerations I have outlined, but there it is.

>>>More importantly, the error can be detected where it happens, and then
>>>handled where you know what to do about it. These are not generally the
>>>same place.
>>
>>Nothing stopping you from doing the same with error codes. Bonus:
>>you get to see the recovery logic *right in the source code*. No
>>jumps to some other place you can't see at the moment. High tech,
>>eh?

>Bonus? You HAVE to see the recovery logic *right in the source code*
>even if you would prefer to have a little separation so the source
>would be more readable.

But that's what I'm arguing. If you think algebra notation is more
readable, fine. I think a control flow that shows what the program
actually does is more readable. I've attempted to debug
several-hundred-thousand line programs that used exceptions. Totally
confusing. Took me a looooong time to resolve problems, because I was
constantly having to do weird system-wide searches to figure out the
control flow. And of course none of the comments about which
exception happened where were accurate -- nor should I have expected
them to be.

>Actually, what you get to see in the source
>code is the contorted control flow needed to interleave the normal
>path and the exception path.

Aha! You distinguish between the two! You must really dig algebraic
notation.

>Notice that the BigInt example is
>already unreadable even though I have only shown the code that
>re-discovers the exception that was already detected inside the
>BigInt methods. You still need to add the code that actually
>handles the error. (So do I, of course, but I only have to add it
>in one place. You have to add it in three places.)

No, most of my errors propagate a fair distance up the call chain
and are handled pretty much in centralized areas.

>>People are always worried about performance. Bah! If you've got a
>>class where inline functions and small oft-called functions are a
>>significant win, odds are you're looking at a numeric class or
>>something similar which can't generate errors (maybe they blow up if
>>you try to divide by zero, but that happens with integers, too, and
>>nobody expects a C++ exception from that).

>*I* care about performance! What is the point of getting faster and
>faster hardware if the programmers fritter away the power by getting
>less and less concerned about performance? And *users* care about
>performance! They will go out and pay big bucks for accelerator
>boards that improve performance by 30%. Heck, that kind of performance
>gain is trivial for a programmer to achieve, if he will just stop
>being so proud of not caring about efficiency.

This is the standard, tired old performance argument. Of course
performance is important. But reread what I wrote -- hint: the key
word to notice is "If" at the beginning of the third sentence. In
your case (and that of many others), you'll have to overlook the part
about arithmetic classes not producing errors.

>I want exceptions mainly for robustness and correctness. Workarounds
>that are inefficient are not attractive, but my real concern is that
>some of the workarounds (like error result codes) are actually more
>likely to introduce programming errors than to catch the errors they
>are supposed to be handling.

Which, ironically, is precisely the way I view exceptions.

>And in some cases (exceptions inside
>constructors, for example) the workarounds simply are not there.

Which is why none of my constructors can fail. Whoever heard of
a variable declaration causing an error? Why would I want this?

>Until we have them, I will muck along as best I can. I *do* check for
>errors (all of them) in my code, because I'm a firm believer in Murphy,
>and I know that any error I don't check for is going to kill me.
>But I'm getting pretty tired of all the work it takes. I'd like a
>more elegant way to do it.

I don't think of it as work any more than any of the rest of coding.

Pete Gontier

unread,
Nov 15, 1993, 11:42:34 PM11/15/93
to
d88...@dront.nada.kth.se (Jon Wätte) writes:

>Ever built a mildly complicated AppleEvent? Probably not.

In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>...yes, I have built a fairly complicated AppleEvent, and to build
>>one such event I generally write lots of subroutines.

<Jon again...>

>Oh, so THERE it's okay to jump to TOTALLY IRRELEVANT parts of the
>code in a wickedly twisted code flow? But for exceptions it isn't?
>Subroutine calls are not any more "local" than exception handlers.

Yes, they are. When you call a function (or return from one) there is
a specific static relationship between two entities in the
programming system. When you raise an exception, there is no such
relationship -- that is the whole point of exceptions! Could this be
used as an argument against virtual functions? Partially. But at
least with virtual functions you're still only talking about two
entities in the programming system. (Same goes for function pointers
and all similar constructs.) With exceptions, you don't know how many
-- again, precisely the point of the exception mechanism.

>A language shouldn't need lawyers. This is especially true for
>English :-) (C++ isn't the right language...)

Well, we knew that, but it's what we have. Now let's keep the fight
over exceptions going until someone with a lot of development clout
and know-how pities us and ships MacEiffel.

>a statement like:

> err = read_file ( whatever ) ;
> if ( err )
> return ;
> ...

>is very dangerous, because when you insert a close_file in the
>error handler, the semantics of the control flow changes:

> if ( err )
> close_file ( whatever ) ;
> return ;

See, I couldn't possibly write that. It looks wrong. I don't even
have to focus my eyes to tell you it looks wrong.

>I've seen enough errors like this in commercial and net code

>(like UNIX SVR{2,3,4}, NetHack and other places)...

Hey, bad programmers write bad code. The language is irrelevant.
Fire them if they do it too often.

>and also in my
>own early code that I now ALWAYS use braces and spaces wherever
>legal and natural.

Early code or early C code? And when you were writing this early C
code, was it considered production code? If so, some manager made a
mistake. Hell, I'm writing production C++ code as I learn C++, and
in this case the manager is me, and I am making a mistake. Oh well.
That doesn't mean I'm going to adopt artificial constraints on my use
of the language, though, because then I would not learn it well.
What does all this mean? I dunno, I am too tired to analyze it and
I have probably just thrown myself into a spiked pit of discourse.

>>with indexed calls to PBGetCatInfo, and when it returns fnfErr, they
>>stop. What I do instead is to *find out* how many files there are in
>>that directory (with another call to PBGetCatInfo before the loop)
>>and then make exactly that many indexed calls to get the directory
>>entries. This technique is hardly rocket science, but it *is*
>>illustrative.

>It's also seriously problem-ridden, since the number of files may
>CHANGE before you're at the end of the list, and calling PBGetCatInfo
>on a file that's not there takes FOREVER. The way I do it, I always
>take one miss. The way you do it, you might miss files, or get 100
>misses.

What I didn't say was that I would do the quite reasonable thing of
stopping if I got any sort of error. I assumed you would understand.
My fault. Don't see how I would miss files, though.


>What you REALLY should do, is work the catalog in tandem, so
>you re-read your last file right AFTER you read the next file, to
>check that it's still there and has the same name. If you find an
>inconsistency, you start over from the beginning. This is the way to
>go if correctness is important.

Not a bad addition to the algorithm. Doubles your system calls,
though, and that can cost a lot if all you are doing is indexing the
files to change one bit and reset the file's info. This is also
mostly a concern, again, for people who do a lot of work in
AppleShare volumes. Which is groupware folks and utilities folks.

Pete Gontier

unread,
Nov 15, 1993, 11:49:31 PM11/15/93
to
de...@umail.umd.edu (Dana S Emery) writes:

>In article <gurgleCG...@netcom.com>, gur...@netcom.com (Pete Gontier)
>wrote:
>>
>> My point was that with a nested code structure, you
>> don't need to draw attention anywhere, because the reader's attention
>> flows along with the code.

>Until the eyes glaze over and a typo occurs: [1, 2, 3, many] is a
>very real cognitive limit for most of us, and mixing indentation
>for recovery with indentation for other reasons (looping, blocks)
>is guaranteed to put code into the [many] category,

Hmmm. I top out at about 7, I guess, like a phone number, and I
suspect you probably top out somewhere up there, as well. But the
point is not how many -- the point is that the number is finite, and
I agree. And I believe that later in the very same post I did say
that when the nesting gets too deep I start to consider the
possibility that I didn't think out the function well enough before I
started typing and I generally break it up into two or more
functions.

>further, in
>order to avoid going right too quickly one has to go down instead,
>and I like to have single-screen procedures as much as possible,
>so I don't like gratuitously vertical coding styles.
>Maybe you have a 21" monitor for when you need to do line/line
>right-set comments; we don't all have that luxury.

I had a 19" monitor once and I found that I had to move my head a lot
to see what was going on. Didn't like that. I traded the 19"er to
someone for an old Apple 13" RGB and we were both happy campers.
Mary's probably got neck-strain by now, and she'll probably palm the
thing off to someone else...

>I vote for the exceptions; I have unfond memories of maintaining
>nested error recovery such as you espouse which ran on for several
>pages (don't blame me for the original code, I rewrote it as soon
>as I could figure out what the heck was going on).

But that's entirely possible with exceptions, too. Bad programmers
write bad code in any language.

>IMHO, any coding technique which enables individual procedures to
>fit onto a single screen has considerable virtue.

I agree, but exceptions aren't particularly enabling in that dep't.

>My experience (1968..present) includes a variety of languages and
>programming styles: Forgo, Fortran II & IV, Basic, TECO, Assembler,
>Structured Assembler (macros), Algol, Pascal, Mary, C, C+-. I
>looked into ((((lisp)))) and ran away in fear.

TECO! My God! You guys are getting to be as rare as WWII vets!
I'm very impressed you've come to Macintosh. Congrats. :-)

Pete Gontier

unread,
Nov 15, 1993, 11:53:31 PM11/15/93
to
cve...@vnet3.vub.ac.be (Chris Verret) writes:

>In this last example both error-handling flow-control and algorithm
>flow-control are extremely intertwined.

>...<strategic rearrangement>...

>Error-handling and algorithm should be separated as much as possible.

As I posted in another message, the fact that you see a difference
speaks volumes as to why you advocate exceptions.

>If the algorithm needs a few
>if statements of its own, then the code would really not be very clear

And if the 'if' statements get too thick, I break up the function. I
think it could stand a couple more 'if's, though.

>(use of 2 kinds of if's, indentation,...).

Heh. :-)

Pete Gontier

unread,
Nov 15, 1993, 11:58:20 PM11/15/93
to
de...@umail.umd.edu (Dana S Emery) writes:

>Yes, as far as the OS is concerned, but nothing prevents you
>from working at a higher level of abstraction provided you do
>it intelligently. In this case it helps to give the TYourFile
>object enough smarts so that it knows when to forward a close
>request to the system, and when to ignore the request (silently)
>when the file wasn't open.

You're setting my teeth on edge, Dana. This technique also allows
bugs to be swallowed silently.

I'll abstract a file, sure, but if it gets closed when it's already
been closed or never been opened, generally I call a routine called
Low_Panic which calls DebugStr and ExitToShell.

If you carry your abstraction logic too far, you start making your
memory allocation package silently do nothing when passed 0 pointers,
and 0 pointers then sometimes persist for a long time in your program
until their source can't be found.

Reid Ellis

unread,
Nov 15, 1993, 9:35:33 AM11/15/93
to
John Werner <wer...@soe.berkeley.edu> writes:
|void bigfunc()
|{
| try {
| func1();
| func2();
| func3();
| /* SUCCESS */
| }
| catch {
| /* RECOVER */
| }
|}

Peter N Lewis <pe...@ncrpda.curtin.edu.au> writes:
|Thats all very well and good, but how often is the recover section
|independent of how far you successfully got?

It may not be, in which case you would code something like this:

void bigfunc()
{
	try {
		func1();
		func2();
		func3();

		// success
	}
	catch(const char *) {
		// do stuff
	}
	catch(void (*fp)(const char *)) {
		// do stuff
	}
	catch(void *) {
		// do stuff
	}
	catch(RangeErr &r_err) {
		// do stuff
	}
	catch(ArrayErr &a_err) {
		// do stuff
	}
	catch(...) {
		// do some stuff
		throw;
	}
}

The amount of information you have about the error is potentially much
greater with exceptions than with error-checking, since your "catch"
mechanism can take a number of different arguments. As well, you can
more elegantly handle partial recovery, as with the final "catch".

I much prefer reading and working on code with exception handling than
that with old-style error checking.

Reid

P.S. Always using {}'s on conditionals allows other people to be able
to read your code more easily. If this is important to you, you
do it. If it's not, you don't have to.
--
Reid Ellis
r...@utcs.utoronto.ca || r...@Alias.com
CDA...@applelink.apple.com || +1 416 362 9181 [work]

Jon Wätte

unread,
Nov 16, 1993, 6:58:22 AM11/16/93
to
In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>I've seen enough errors like this in commercial and net code
>>(like UNIX SVR{2,3,4}, NetHack and other places)...

>Hey, bad programmers write bad code. The language is irrelevant.
>Fire them if they do it too often.

That's not always the right thing to do. The German Nazi army always
shot the operative in charge of failed operations, and, as a result,
depleted their own human capital and lost many skilled and valuable
leaders.

More to the point, there may be people with good design skills, but
no particular tendency towards language lawyerism. "C" and its
derivatives (C++) is NOT the right language for these people, but
when you have to put them in the same boat, coding practices like
these help tremendously.

Me, myself, I'm no hot language lawyer; I don't even remember the
priority relationship between ternary and bit shift operators, or
bit shift and bit arithmetic operators. (though I think it is in
the now stated order low-to-high)

>That doesn't mean I'm going to adopt artificial constraints on my use
>of the language, though, because then I would not learn it well.

It's not an "artificial constraint," it's a convention. The
Oxford Dictionary is full of words people don't use; if I felt
about English the way you do about C, y'all would be scrambling
for the dictionary to decode anything I post :-) (And I would
of course have to USE the dictionary to compose a post)

>>It's also seriously problem-ridden, since the number of files may
>>CHANGE before you're at the end of the list, and calling PBGetCatInfo

>What I didn't say was that I would do the quite reasonable thing of
>stopping if I got any sort of error. I assumed you would understand.

WRONG! You may get an error because of, say, AppleShare permissions
or NFS server timeout, while there are still more files to go.

>My fault. Don't see how I would miss files, though.

Simple:

1) Pete gets the number of files in the folder, let's say it's 2.
2) Pete gets info on file 1 in the folder, call it "A"
3) Mary saves her new document in the folder, call it "B"
4) Pete gets info on file 2 in the folder, file "B"
5) Pete now stops, since he thinks there are only two files in the
folder, even though the file "C" which was in there when he
started looking still is un-looked-upon.

>>What you REALLY should do, is work the catalog in tandem, so
>>you re-read your last file right AFTER you read the next file, to

>mostly a concern, again, for people who do a lot of work in
>AppleShare volumes. Which is groupware folks and utilities folks.

WRONG!

ALL applications work (SHOULD work) in an AppleShare environment
these days of Personal File Sharing and campuswide corporate
networks. You say you think of error codes as part of the algorithm,
well, you'd better start thinking of concurrency as part of the
algorithm as well! I hope you remember enough graph theory not to
just run away at the sight of the word "deadlock" :-)

And, as you see, now we have BOTH error codes AND concurrency to
worry about. Soon someone is going to bring up a THIRD thing to
worry about, and then you'll see how exceptions can handle this
much more elegantly than error codes.

Of course, I'm only advocating style, here, we've both shown that
there is a bijection between our styles ;-)


--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

NCC-1701

Ron_Hu...@bmug.org

unread,
Nov 16, 1993, 4:07:55 AM11/16/93
to
> A good language is clear and obvious no matter how you write it
> within the constraints of the language.

A language that prevents unclear expression is too limited to
express much.

Besides, as you use a language, you develop idioms for handling
frequently recurring tasks. When you encounter one of your idioms,
in your own code or in others', you recognize the idiom, and the
meaning is immediately clear and obvious.

But when you encounter an unfamiliar idiom, the code appears to
be confused and unclear - until you learn the idiom. (Not that all
confused code is idiomatic. Some code really is confused.)

So, is a program that contains unfamiliar idioms, or a language that
permits them, necessarily bad? I would say not, although I might
find fault with particular idioms. I have learned a lot from
struggling to learn new idioms, and I often find that the idiom,
once learned, has more than enough utility to justify the original
effort.

Example: how long did it take you to figure out what is REALLY going
on with C++ stream I/O, especially with manipulators? Yet it's
readable, both before and after you finally understand it. The only
hard part is figuring out why it works. And the concept, once learned,
is widely applicable in other areas that have nothing to do with I/O.
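
For anyone who hasn't dug into it: a manipulator is just a function from
ostream& to ostream&, and operator<< has an overload that takes such a
function and applies it to the stream. A minimal example (modern headers;
the idea is the same with iostream.h, and 'bullet' is a made-up name):

#include <iostream>

// A home-made manipulator: a plain function taking and returning the
// stream.  'cout << bullet' works because operator<< has an overload
// that accepts a pointer to such a function and simply calls it.
std::ostream &bullet ( std::ostream &os )
{
	return os << "  * ";
}

int main ( )
{
	std::cout << bullet << "first item"  << std::endl
	          << bullet << "second item" << std::endl;
	return 0;
}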


Pete Gontier

unread,
Nov 16, 1993, 9:45:18 PM11/16/93
to
d88...@dront.nada.kth.se (Jon Wätte) writes:

>In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>>I've seen enough errors like this in commercial and net code
>>>(like UNIX SVR{2,3,4}, NetHack and other places)...

>>Hey, bad programmers write bad code. The language is irrelevant.
>>Fire them if they do it too often.

>That's not always the right thing to do. The German Nazi army always
>shot the operative in charge of failed operations, and, as a result,
>depleted their own human capital and lost many skilled and valuable
>leaders.

That's why I said fire them if they do it too often. Or put them onto
another sort of job within the project. Sounds like the Nazis were
killing folks if they screwed up once. I mean, if someone has the
chronic problem of not knowing when curly-braces are appropriate,
should you trust any of the rest of their code?

>More to the point, there may be people with good design skills, but
>no particular tendency towards language lawyerism. "C" and its
>derivatives (C++) is NOT the right language for these people, but
>when you have to put them in the same boat, coding practices like
>these help tremendously.

I would say that if someone is not suited to the chosen language but
has other valuable skills, they should be put into a position where
they can use those valuable skills all the time.

>Me, myself, I'm no hot language lawyer; I don't even remember the
>priority relationship between ternary and bit shift operators, or
>bit shift and bit arithmetic operators. (though I think it is in
>the now stated order low-to-high)

I'm not so good with operator precedence, either, which is why I tend
to over-use parens. You could say that if I were consistent, I would
also tend to over-use curly-braces. But, silly me, I consider
indentation to be part of the language, the same part of the language
curly-braces belong in and regular parens do not. Probably people
will be up in arms about the idea of indentation being part of the
language and somebody will quote the author of 'make' saying it was
stupid to use indentation as part of the language. Oh well.

>>That doesn't mean I'm going to adopt artificial constraints on my use
>>of the language, though, because then I would not learn it well.

>It's not an "artificial constraint," it's a convention.

Sure it is; there's a feature that cannot be used. It cannot be used
because of a convention, but it's still a constraint. In fact,
because it's just a convention, it's specifically an artificial
constraint.

>The Oxford Dictionary is full of words people don't use; if I felt
>about English the way you do about C, y'all would be scrambling
>for the dictionary to decode anything I post :-) (And I would
>of course have to USE the dictionary to compose a post)

Firstly, we were talking about C++, not C, and secondly, YES, I agree
with you, this will definitely be the downfall of C++. In the
meantime, I don't intend to hamstring my use of its features, because
in light of its complexity, not using the features just adds insult
to injury.

>>>It's also seriously problem-ridden, since the number of files may
>>>CHANGE before you're at the end of the list, and calling PBGetCatInfo

>>What I didn't say was that I would do the quite reasonable thing of
>>stopping if I got any sort of error. I assumed you would understand.

>WRONG! You may get an error because of, say, AppleShare permissions
>or NFS server timeout, while there are still more files to go.

And I would call those stopping conditions. Maybe if I were doing a
recursive descent algorithm I would have to handle AFP permissions,
but as I said, that sort of algorithm is mostly in the realm of
groupware and utilities.

>1) Pete gets the number of files in the folder, let's say it's 2.
>2) Pete gets info on file 1 in the folder, call it "A"
>3) Mary saves her new document in the folder, call it "B"
>4) Pete gets info on file 2 in the folder, file "B"
>5) Pete now stops, since he thinks there are only two files in the
> folder, even though the file "C" which was in there when he
> started looking still is un-looked-upon.

This doesn't sound too bad to me. If I'm groupware or a utility, I
need to handle this situation correctly. If I'm a word processor or
most other sorts of apps, I think this problem is acceptably left
unsolved. Unless you can come up with a way data could be lost. I
don't think that's unlikely, I just haven't seen it yet.

>ALL applications work (SHOULD work) in an AppleShare environment
>these days of Personal File Sharing and campuswide corporate
>networks. You say you think of error codes as part of the algorithm,
>well, you'd better start thinking of concurrency as part of the
>algorithm as well! I hope you remember enough graph theory not to
>just run away at the sight of the word "deadlock" :-)

>And, as you see, now we have BOTH error codes AND concurrency to
>worry about. Soon someone is going to bring up a THIRD thing to
>worry about, and then you'll see how exceptions can handle this
>much more elegantly than error codes.

Granted, but I don't think most apps need to care anyway. Prove me
wrong -- I'm eager to see the light.

Peter N Lewis

unread,
Nov 16, 1993, 11:12:02 PM11/16/93
to
gur...@netcom.com (Pete Gontier) writes:

>> if ( err )
>> close_file ( whatever ) ;
>> return ;

>See, I couldn't possibly write that. It looks wrong. I don't even
>have to focus my eyes to tell you it looks wrong.

The reason I always put the {'s in (or begin/end in Pascal, although I'm
not as consistent there :-) is to avoid having to add them in when I later
add a line. If you want to add in the close_file line, you shouldn't
have to add in a bunch of curly brackets as well. In any event, putting
the curly brackets in is a good practice, so you might as well do it...

>>I've seen enough errors like this in commercial and net code
>>(like UNIX SVR{2,3,4}, NetHack and other places)...

>Hey, bad programmers write bad code. The language is irrelevant.
>Fire them if they do it too often.

If you fire programmers for making mistakes like the above, you'd probably
never have any programmers.

Glenn Reid

unread,
Nov 16, 1993, 2:41:12 PM11/16/93
to
Reid Ellis writes

> P.S. Always using {}'s on conditionals allows other people to be able
> to read your code more easily. If this is important to you, you
> do it. If it's not, you don't have to.

I agree. Furthermore, always using {}'s makes the code much more
maintainable. If you add statements within the "if" or "else"
clause, you don't have to remember to add {}'s around the clause if
they're there to begin with. This has bitten MANY people, because
the code compiles fine, but the new statements are not inside the right
clause, even though they may be indented correctly.

Example:

BEFORE:

if ( some_moderately_long_condition )
statement_or_function_call ( with, lots, of, arguments );
some_other_function_call ( with, more, args );

AFTER:

if ( some_moderately_long_condition )
statement_or_function_call ( with, lots, of, arguments );
statement_you_think_is_inside_IF_CLAUSE ( but, it's, not );
some_other_function_call ( with, more, args );

It happens a lot.

--
Glenn Reid gl...@rightbrain.com
Woodside, California
Shameless Plug: buy my book, "Thinking in PostScript"

Jon Wätte

unread,
Nov 17, 1993, 7:10:28 PM11/17/93
to
In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>I'm not so good with operator precedence, either, which is why I tend
>to over-use parens. You could say that if I were consistent, I would

I fully parenthesize most of what I do, just as I fully white-space
and brace it :-D

>also tend to over-use curly-braces. But, silly me, I consider
>indentation to be part of the language, the same part of the language
>curly-braces belong in and regular parens do not. Probably people

OOOOHHH - you're just SHOUTING for it :-)

Indentation in nested if() or ternary-operator expressions/statements
has caused a LOT of bugs over the years. It's not how you indent
code, it's what the token stream actually reads like that's important.

Like,

if ( )
    if ( )
        something
else
    something-else
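Braces remove the ambiguity no matter how the lines are indented. For
example, if the else really was meant to go with the outer if (conditions
left empty, as in the fragment above):

    if ( ) {
        if ( )
            something
    } else {
        something-else
    }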

>Firstly, we were talking about C++, not C, and secondly, YES, I agree
>with you, this will definitely be the downfall of C++. In the

Okay, I'll quit this thread now. Thank you. And remember to catch
all those exceptions you throw :-)

>>WRONG! You may get an error because of, say, AppleShare permissions
>or NFS server timeout, while there are still more files to go.

>And I would call those stopping conditions. Maybe if I were doing a

Huh? Just because some exotic permission scheme doesn't let you
get info about one folder in a folder, but would let you get
info about the next file in the folder, you're skipping that
next file?

>but as I said, that sort of algorithm is mostly in the realm of
>groupware and utilities.

Remind me not to let you write any of my functional specs.

>>1) Pete gets the number of files in the folder, let's say it's 2.
>>2) Pete gets info on file 1 in the folder, call it "A"
>>3) Mary saves her new document in the folder, call it "B"
>>4) Pete gets info on file 2 in the folder, file "B"
>>5) Pete now stops, since he thinks there are only two files in the
>> folder, even though the file "C" which was in there when he
>> started looking still is un-looked-upon.

>If I'm a word processor or
>most other sorts of apps, I think this problem is acceptably left
>unsolved.

You could get a job with certain parts of Microsoft within minutes
of writing that. That's not a compliment.

--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

This article printed on 100% recycled electrons.

Jon Wätte

unread,
Nov 17, 1993, 7:02:11 PM11/17/93
to
In <1993Nov16.0...@bmug.org> Ron_Hu...@bmug.org writes:

>> A good language is clear and obvious no matter how you write it
>> within the constraints of the language.

>A language that prevents unclear expression is too limited to
>express much.

I just love it when people grab statements out of the air without
supporting evidence, so I can watch them go down in flames :-)

>find fault with particular idioms. I have learned a lot from
>struggling to learn new idioms, and I often find that the idiom,
>once learned, has more than enough utility to justify the original
>effort.

Except, of course, to the language-lawyer types who state that
you should know all aspects of the language and apply them
to anything you do...

Pete Gontier

unread,
Nov 18, 1993, 12:51:21 PM11/18/93
to
pe...@ncrpda.curtin.edu.au (Peter N Lewis) writes:

>>Hey, bad programmers write bad code. The language is irrelevant.
>>Fire them if they do it too often.

>If you fire programmers for making mistakes like the above, you'd probably
>never have any programmers.

I never said there were very many good programmers in the universe.
:-) Besides, "too often" leaves the whole thing open to
interpretation. Whatever limit a given manager feels is right.
Probably there is an optimal absolute quantifier, but I haven't the
first clue what it is. My point was that leaving out curly braces is
a stupid mistake that ought to cast aspersions on the rest of the
programmer's code.

The last time I said this, someone got hot under the collar because
*he* made the mistake occasionally until *he* started to put
curly-braces around every predicate. So, of course, *everyone* must
make that mistake on a regular basis. Me, I haven't made that mistake
for, say, six years? That's probably an underestimate. It must have
been right about the time I decided that, for the programmer, white
space is part of the language. (Yes, Jon, I know the compiler
doesn't care.)

Pete Gontier

unread,
Nov 18, 1993, 7:59:31 PM11/18/93
to

I wrote:

>>But, silly me, I consider
>>indentation to be part of the language, the same part of the language
>>curly-braces belong in and regular parens do not. Probably people

d88...@dront.nada.kth.se (Jon Wätte) writes:

>It's not how you indent code, it's what the token stream actually
>reads like that's important.

I think they're both important. One is for the compiler, the other is
for the programmer.

What does this have to do with exceptions? :-)

I wrote:

>>Firstly, we were talking about C++, not C, and secondly, YES, I agree
>>with you, this will definitely be the downfall of C++. In the

Jon wrote:

>Okay, I'll quit this thread now. Thank you. And remember to catch
>all those exceptions you throw :-)

There was not enough context here for me to see the humor, but I
appreciate the smiley.

Jon wrote:

>>>WRONG! You may get an error because of, say, AppleShare permissions
>>>or NFS server timeout, while there are still more files to go.

I wrote:

>>And I would call those stopping conditions. Maybe if I were doing a

Jon wrote:

>Huh? Just because some exotic permission scheme doesn't let you
>get info about one folder in a folder, but would let you get
>info about the next file in the folder, you're skipping that
>next file?

It's a truly bizarre system whose permissions deny "get info" access
to some files in a directory and not others. I can see how opening
the files would be an issue for this, but not just getting info.
You're the groupware guy -- is there really a file system that works
like this? Either way, let's consider this argument closed -- if
there is such a weird file system, then I've been wrong all this time
out of ignorance, and if there is not such a weird file system, then
we must've been arguing about two different things and we can stop.

>>>1) Pete gets the number of files in the folder, let's say it's 2.
>>>2) Pete gets info on file 1 in the folder, call it "A"
>>>3) Mary saves her new document in the folder, call it "B"
>>>4) Pete gets info on file 2 in the folder, file "B"
>>>5) Pete now stops, since he thinks there are only two files in the
>>> folder, even though the file "C" which was in there when he
>>> started looking still is un-looked-upon.

>>If I'm a word processor or
>>most other sorts of apps, I think this problem is acceptably left
>>unsolved.

>You could get a job with certain parts of Microsoft within minutes
>of writing that. That's not a compliment.

As I said in the other message, where's the opportunity for data
loss? I'm thinking the most mission-critical thing a word processor
could possibly do in this context is fail to list a file in a popup
menu or something. And that popup menu is going to become out of date
no matter what -- so why take the double system call hit? If you
really want to solve the problem, you are going to have to be
continuously scanning to update the menu anyway.

Reid Ellis

unread,
Nov 18, 1993, 1:02:25 PM11/18/93
to
Pete Gontier <gur...@netcom.com> writes:
|I'll abstract a file, sure, but if it gets closed when it's already
|been closed or never been opened, generally I call a routine called
|Low_Panic which calls DebugStr and ExitToShell.

I would hope that you only do this while you're debugging code. For
something like closing a file that is already closed or never opened,
I would do something like:

statusCode XXFile::close()
{
    DB_ASSERT(isOpen());
    if (isOpen())
        doClose();   // hypothetical low-level call that actually closes the file
    return sSuccess;
}

with DB_ASSERT() being a macro that is compiled out of production
code. Note that even if it's not open, it returns success because the
file can now be considered "closed". If it wasn't open, it was a
programming error, not a runtime condition that can't be handled or
something.

It *always* pays to code defensively.
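For what it's worth, a minimal sketch of one way such a macro might look
(an illustrative assumption, not the actual DB_ASSERT definition):

    #include <stdio.h>
    #include <stdlib.h>

    #ifdef NDEBUG
    #define DB_ASSERT(cond)  ((void) 0)     /* compiled out of production builds */
    #else
    #define DB_ASSERT(cond)                                             \
        do {                                                            \
            if (!(cond)) {                                              \
                fprintf(stderr, "DB_ASSERT failed: %s, %s:%d\n",        \
                        #cond, __FILE__, __LINE__);                     \
                abort();   /* or Debugger()/ExitToShell() on the Mac */ \
            }                                                           \
        } while (0)
    #endif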

Reid

Glenn Reid

unread,
Nov 19, 1993, 12:40:25 AM11/19/93
to
Pete Gontier writes

> I'm not so good with operator precedence, either, which is why I tend
> to over-use parens.

Agreed. There is another reason. Since lots of people are "not too sure"
of operator precedence, this means (in the real world) that there are
compilers that implement them differently, because the people who wrote
the compilers might not be "too sure" either. Parens disambiguate this,
and keep you from relying on the compiler-writers to keep bugs out of
your code. If the bug is in your code, it doesn't matter whether the
compiler was at fault: it's still your bug.
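A made-up fragment (not from anyone's post) showing the kind of trap the
extra parentheses guard against -- in C, == binds more tightly than &, so
the two tests below are not equivalent:

    #define READY_MASK  0x03

    int is_ready ( int flags )
    {
        /* parsed as  flags & (READY_MASK == READY_MASK),  i.e.  flags & 1 */
        /* return flags & READY_MASK == READY_MASK; */

        /* the parenthesized form says exactly what was meant */
        return ( flags & READY_MASK ) == READY_MASK;
    }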

> You could say that if I were consistent, I would
> also tend to over-use curly-braces. But, silly me, I consider
> indentation to be part of the language, the same part of the language
> curly-braces belong in and regular parens do not. Probably people
> will be up in arms about the idea of indentation being part of the
> language and somebody will quote the author of 'make' saying it was
> stupid to use indentation as part of the language. Oh well.

Well, indentation isn't part of the language. White space separates
tokens. That's all. You know that, of course, but you offer it as
a justification for laziness, sloppiness, or whatever else.

Curly braces compile down into nothingness. You get the same compiled
code either way. You get more maintainable code and code that's easier
to read if you use them consistently. Code with too many parentheses
also compiles down into the same bits, so it's exactly the same argument,
except that it's a little easier to deal with lack of {} than lack of ().

[explanation of missing files in a directory elided]


> This doesn't sound too bad to me. If I'm groupware or a utility, I
> need to handle this situation correctly. If I'm a word processor or
> most other sorts of apps, I think this problem is acceptably left
> unsolved. Unless you can come up with a way data could be lost. I
> don't think that's unlikely, I just haven't seen it yet.

Nonsense. You proposed an algorithm that was inferior because it had
the possibility of actually not doing the right thing (it might miss
new files that were created in the window of time between when you got
the file count and when you enumerated the files). The algorithm of
which you made fun and your algorithm differed ONLY in that you explicitly
got the file count before you started and enumerated. Both algorithms
claimed to terminate on error. That means that the original algorithm
works in all cases, but yours doesn't. The only advantage to yours is
that it might terminate without the error condition (instead it terminates
at your loop constraint). Big deal. Your algorithm isn't as good,
which is what was being pointed out to you. Saying it's "acceptably left
unsolved" is simply a cop-out. The problem is TRIVIALLY SOLVED by not
using the algorithm you posed as an example to support your coding style.

> Granted, but I don't think most apps need to care anyway. Prove me
> wrong -- I'm eager to see the light.

Pete, you say many wise and useful things on the net. The lesson to be
learned, I think, is that dogma in coding style is not as worthy as
algorithms that adapt better to the fast-changing world of networking,
concurrency, and so forth. You have focused repeatedly on old-fashioned
defensive coding and you've said that errors are the FIRST thing that
you think about. The "light" that you are eager to see is that the
algorithm should be what you think about first, and the algorithm should
inherently allow for "error conditions" that are time-related and not
simply bad results returned from function calls. In the example you
gave about enumerating files in a directory, you simply don't allow for
a very serious error possibility: an error in timing that could occur
in a highly networked environment. It's also an error that has a trivial
workaround, as was pointed out.

I don't mean to take issue with you directly, but you're sticking hard
to your point, and I figured I had to try hard to pry you loose from it :-)

Steven Lane

unread,
Nov 19, 1993, 5:24:24 PM11/19/93
to
Heaven knows I'm no guru, but there are a couple of remarks I do want
to make on this post.

In article <13...@rtbrain.rightbrain.com> gl...@rightbrain.com writes:
>Pete Gontier writes
>> I'm not so good with operator precedence, either, which is why I tend
>> to over-use parens.
>
>Agreed. There is another reason. Since lots of people are "not too sure"
>of operator precedence, this means (in the real world) that there are
>compilers that implement them differently, because the people who wrote
>the compilers might not be "too sure" either. Parens disambiguate this,

At least in the case of C there's a standard, in which operator
precedence is (as far as I know) rigorously defined. An ANSI-compliant
compiler isn't allowed to be "not too sure" about operator precedence.

>
> You could say that if I were consistent, I would
>> also tend to over-use curly-braces. But, silly me, I consider

>Curly braces compile down into nothingness. You get the same compiled
>code either way. You get more maintainable code and code that's easier

This isn't true, but you may not have written what you meant. Curly
braces frequently compile to branch statements. For example

if( condition)
{
    DoThingOne();
    DoThingTwo();
}
DoThingThree();

will not produce the same compiled code regardless of whether there
are curly braces or not. You know this.

>Nonsense. You proposed an algorithm that was inferior because it had
>the possibility of actually not doing the right thing (it might miss
>new files that were created in the window of time between when you got
>the file count and when you enumerated the files). The algorithm of
>which you made fun and your algorithm differed ONLY in that you explicitly
>got the file count before you started and enumerated. Both algorithms
>claimed to terminate on error. That means that the original algorithm
>works in all cases, but yours doesn't. The only advantage to yours is
>that it might terminate without the error condition (instead it terminates
>at your loop constraint). Big deal. Your algorithm isn't as good,
>which is what was being pointed out to you. Saying it's "acceptably left
>unsolved" is simply a cop-out. The problem is TRIVIALLY SOLVED by not
>using the algorithm you posed as an example to support your coding style.

If I'm remembering correctly the other algorithm was to iterate until
hitting a "file not found" condition. Pete proposed to avoid that by
getting the file count first. Neither of these directly fixes the
potential problems from concurrency. Other solutions were proposed,
but I don't think it's fair to say the problem is "trivially solved,"
nor is it the case that the original algorithm (if by this we're
talking about the same "go till file not found" method) works in all
cases. It's actually not such an easy problem to solve.
--
----
Steve Lane
University of Chicago, Department of History
sg...@midway.uchicago.edu

Ron_Hu...@bmug.org

unread,
Nov 18, 1993, 11:16:00 PM11/18/93
to
Pete Gontier,gur...@netcom.com writes:

>But, silly me, I consider

>indentation to be part of the language

Finally! Something in which we are in total accord!

I once saw a (toy) language that dispensed with begin/end and their
funny-character equivalents in favor of indentation. The body of
an 'if' statement consisted of all the following lines that were
indented further than the if. The matching 'else', if any, was the
one at the same indentation as the 'if', and its body likewise
extended across all statements indented relative to it. Similar
rules applied to other compound statements.

I looked at that, and thought about it, and liked it. I am now very
strict about indentation in all my code. I write as if the indentation
defined the structure. The compiler, of course, needs the curly braces
or begin/end, but to me those are superfluous syntax added merely to
please the compiler. Since I don't use them to tell me the structure,
I relegate them to the end of the line. I (almost) never put a curly
brace on a line by itself.

Not only does that make my code more compact (because it uses fewer
lines, allowing more 'real' code to fit on the screen), and makes the
code lots easier to read (because the indentation tells you the
structure immediately, and is easier for the human eye to see than
those funny characters), but, surprisingly enough, it actually makes
it easier to pin down those rare cases of missing/mismatched braces.
If you are careful not merely to indent, but to indent according to
very strict rules, it is a simple matter to check whether a line is
indented correctly by comparing it with the immediately preceding line.
This is a purely local test. You never have to look back more than one
line. And the place where the indentation seems wrong is the place where
the braces are wrong.

Of course, to make it work, you must always indicate when the next line
is to be indented. In particular, if an 'if' does not fit entirely
on one line, its body *must* be enclosed in curly braces:

if (...) {        // '{' says next line must be indented
    ..body.. }    // '}' says next line must be outdented

even if ..body.. is a single statement. (If you want to relax this
rule, then checking indentation requires looking at two previous
lines instead of one. The amount of look-back is still limited.)

BTW, I do consider parentheses and braces to be the same kind of
syntactic item (just as I consider statements and expressions to be
the same kind of syntactic item). If an expression does not fit on a
line, I enclose it in parentheses and indent the part that doesn't fit:

aVariable = (aLongExpression         // Unbalanced '(' forces indentation
    + anotherLongExpression);        // Unbalanced ')' ends indentation

Then parenthesis counting tells me if I have the indentation right,
and the indentation tells me if I have the parentheses right.

-Ron Hunsinger


Nevin Liber

unread,
Nov 19, 1993, 10:25:13 PM11/19/93
to
In article <13...@rtbrain.rightbrain.com>,

Glenn Reid <gl...@rightbrain.com> wrote:
>Pete Gontier writes
>> I'm not so good with operator precedence, either, which is why I tend
>> to over-use parens.
>
>Agreed. There is another reason. Since lots of people are "not too sure"
>of operator precedence, this means (in the real world) that there are
>compilers that implement them differently, because the people who wrote
>the compilers might not be "too sure" either.

I sincerely doubt that (at least any compiler that has come out within
the last decade). It is fairly simple to sit down with the ANSI C
standard (or even K&R 1, for that matter), and write a compiler that
gets the precedence correct. If they can't get this right, I wouldn't
trust that compiler to get anything right.

If you name ONE production-quality compiler (that someone actually uses
for something real, and their code would produce the same results under
something like GCC), that got the precedence wrong, I'll apologize.

From a developer's point of view (where one of the main purposes is to
keep the code maintainable), I'm all for using redundant parentheses,
brackets, etc. From a compiler writer's point of view, this is far too
simple a thing to implement to be wrong (by the time the compiler
ships).
--
Nevin ":-)" Liber ne...@cs.arizona.edu (602) 293-2799

Jon Wätte

unread,
Nov 20, 1993, 9:42:20 AM11/20/93
to

>If I'm remembering correctly the other algorithm was to iterate until
>hitting a "file not found" condition. Pete proposed to avoid that by
>getting the file count first. Neither of these directly fixes the
>potential problems from concurrency. Other solutions were proposed,
>but I don;t think it's fair to say the problem is "trivially solved,"
>nor is it the case that the original algorithm (if by this we're

No; the final candidate for the original algorithm worked in
tandem, checking the last file right AFTER you check the next one,
to make sure it's still the same file.


--
-- Jon W{tte, h...@nada.kth.se, Mac Hacker Deluxe --

I offer a pot of gold for Gates' head on a pole.
Naah - bashing Microsoft is "out." Love, Peace and Understanding!

Bruce Hoult

unread,
Nov 20, 1993, 7:50:22 AM11/20/93
to
gur...@netcom.com (Pete Gontier) writes:
> TECO! My God! You guys are getting to be as rare as WWII vets!
> I'm very impressed you've come to Macintosh. Congrats. :-)

I thought *everyone* had used TECO!

I've never done a lot in TECO itself, but in 1985/86 I did a heap of
stuff on a TECO clone called "speed" on Data General's AOS/VS.

Pete Gontier

unread,
Nov 20, 1993, 10:27:15 PM11/20/93
to
Reid Ellis <r...@Alias.com> writes:

>Pete Gontier <gur...@netcom.com> writes:
>|I'll abstract a file, sure, but if it gets closed when it's already
>|been closed or never been opened, generally I call a routine called
>|Low_Panic which calls DebugStr and ExitToShell.

>I would hope that you only do this while you're debugging code. For
>something like closing a file that is already closed or never opened,
>I would do something like:

> statusCode XXFile::close()
> {
>     DB_ASSERT(isOpen());
>     if (isOpen())
>         doClose();   // hypothetical low-level call that actually closes the file
>     return sSuccess;
> }

>with DB_ASSERT() being a macro that is compiled out of production
>code.

Yes, I think this is a good idea whether one does this with a macro
or an inline function or a real function, depending on what's
needed.

>Note that even if it's not open, it returns success because the
>file can now be considered "closed". If it wasn't open, it was a
>programming error, not a runtime condition that can't be handled or
>something. It *always* pays to code defensively.

I'm not sure what I don't like about this, but I don't like it.
Either the program is debugged or it's not. To be a little more
specific: if code is this defensive everywhere, you start getting
performance hits. Probably not in this example, but say you're
dealing with deleting a record in a big database. Do you leave the
version which checks for the presence of the record before deleting
it in the production build? How about the version of the high-level
function which verifies relation integrity before starting the
process of deleting some set of relations and records, etc.?

Pete Gontier

unread,
Nov 21, 1993, 6:36:30 PM11/21/93
to
Netcom appears to have been having newsfeed problems lately, so I am
forced to resort to this (sorry Steve):

sg...@kimbark.uchicago.edu (Steven Lane) quotes:

>>Nonsense. You proposed an algorithm that was inferior because it had
>>the possibility of actually not doing the right thing (it might miss
>>new files that were created in the window of time between when you got
>>the file count and when you enumerated the files).

It might have been inferior in its concurrency handling. However, my
purpose in posting the algorithm was not to present it as a solution
to the concurrency problem. It was to point out that confusing errors
with data is a Bad Thing. Read on.

>>The algorithm of which you made fun and your algorithm differed ONLY
>>in that you explicitly got the file count before you started and
>>enumerated. Both algorithms claimed to terminate on error.

No. The first algorithm claimed to terminate when it had run out of
files to enumerate. In fact, it terminated when it got fnfErr and
then assumed that meant it had run out of files to enumerate. My
version makes no such assumptions.

>>That means that the original algorithm works in all cases, but
>>yours doesn't.

You didn't explain this assertion enough for me to understand it.

>>The only advantage to yours is that it might terminate without the
>>error condition (instead it terminates at your loop constraint). Big
>>deal.

Actually, I do think it is a "big deal". If you don't think so,
that's your choice. All I can do is post what I think.

>>Your algorithm isn't as good, which is what was being pointed out to
>>you.

Need some help getting that chip off your shoulder?

>>Saying it's "acceptably left unsolved" is simply a cop-out.

In a way, you are right, whoever you are. My mistake was that I
attempted to address Jon's points about concurrency when that
actually had nothing to do with what I intended to communicate with
my first post in this thread. If you are concerned with concurrency,
then yes, my answer was a cop-out. At the time, I was concerned with
avoiding looking like a fool, so I of course ended up looking like a
fool.

I do wish someone would mail me the full text of this article Steve
quoted.

Glenn Reid

unread,
Nov 21, 1993, 7:32:12 PM11/21/93
to
Nevin Liber writes

> In article <13...@rtbrain.rightbrain.com>,
> Glenn Reid <gl...@rightbrain.com> wrote:
> >Pete Gontier writes
> >> I'm not so good with operator precedence, either, which is why I tend
> >> to over-use parens.
> >
> >Agreed. There is another reason. Since lots of people are "not too sure"
> >of operator precedence, this means (in the real world) that there are
> >compilers that implement them differently, because the people who wrote
> >the compilers might not be "too sure" either.
>
> I sincerely doubt that (at least any compiler that has come out within
> the last decade). It is fairly simple to sit down with the ANSI C
> standard (or even K&R 1, for that matter), and write a compiler that
> gets the precedence correct. If they can't get this right, I wouldn't
> trust that compiler to get anything right.
>
> If you name ONE production-quality compiler (that someone actually uses
> for something real, and their code would produce the same results under
> something like GCC), that got the precedence wrong, I'll apologize.

Okay, so this is a bad example to illustrate my point. Compilers are
probably always going to get the precedence of operators right.

Let me put it another way. Having been in the "real world" of programming
for a long time, I've learned that everybody is incredibly busy and that
there is a staggeringly enormous amount to know. Too much, in fact, for
any single person to know. Therefore all of us have become de facto
"specialists". One guy knows graphic fill algorithms that won't miss
pixels or over-scan under the wrong conditions. Somebody else knows how
to implement page fault algorithms in a multi-tasking operating system.
Yet another woman can write microcode for processor caches. And still
some others have made the C language into a hobby and have mastered all
the strange little syntactic constructions that are possible, including
burying function calls inside conditionals, using the "?" operator, and
whatever else.

My point, poorly made, was that the FEWER things you rely on other
professionals to have down cold, the BETTER off you'll be, and the more
reliable your code will be. The world SHOULD be a lot better than it IS,
so I've learned not to rely on people having done the right thing,
especially if it's an area where obviously very talented and capable
people like Pete Gontier say they're "not too sure". There's a reason
he's "not too sure", and it's the same reason that I'm not too sure,
I think, which is that it's a waste of time to be sure about operator
precedence. Parentheses disambiguate the code once and forever, and
then you can read your own code a year later, someone else maintaining
the code can read it, and the code will probably continue to work for
a long time. So we put in parentheses to indicate PRECISELY what we
wish to happen, rather than relying on operator precedence to see to it
that what we want to happen actually does.

The same thing applies to curly braces. Those of you who think that
white space is part of the language can certainly write code that
compiles and executes correctly. So can I. But you're relying on the
rest of us to have learned your disciplines, which means that you're
writing slightly less maintainable code. That's your prerogative, of
course, and maybe it's a good trade-off because you fit more code in
a screenful. That's not for me to say.

Anyway, thanks for the followups that pointed out that I'm an idiot
to think that compilers might not get operator precedence right. Of
course they will. But I hope I've made a different point reasonably,
which is that adopting a coding style that reduces ambiguity and/or
misunderstood code is as much a part of "defensive" coding as checking
error conditions religiously. And, in my opinion, using extra parentheses
instead of operator precedence is a good idea, as are "extra" curly
braces.

Glenn Reid

unread,
Nov 21, 1993, 7:41:55 PM11/21/93
to
Steven Lane writes

> Heaven knows I'm no guru, but there are a couple of remarks I do want
> to make on this post.

[stuff about operator precedence and compilers omitted; see separate
followup]

> Glenn Reid writes:


> > Pete Gontier writes:
> > You could say that if I were consistent, I would
> >> also tend to over-use curly-braces. But, silly me, I consider
>
> >Curly braces compile down into nothingness. You get the same compiled
> >code either way. You get more maintainable code and code that's easier
>
> This isn't true, but you may not have written what you meant. Curly
> braces frequently compile to branch statements. For example
>
> if( condition)
> {
> DoThingOne();
> DoThingTwo();
> }
> DoThingThree();
>
> will not produce the same compiled code regardless of whether there
> are curly braces or not. You know this.

Right. What I was saying (or trying to) is that so-called "extra"
curly braces compile down into nothing. For example, some would write:

if ( condition )
    DoThingOne();
else
    DoThingTwo();

I advocate "extra" curly braces to make sure that the structure is clear,
and to make the code more maintainable:

if ( condition ) {
    DoThingOne();
} else {
    DoThingTwo();
}

What I was saying before is that the compiler will generate the same
code with or without the curly braces in these two examples.

It costs you one vertical line (with the close-} on it), but it's much
easier to deal with, in my opinion. In fact, when I write code like
this I ALWAYS write the entire conditional first, then go back and
fill in the clauses. As such, I never get unbalanced brace problems.
I'd type this entire construct first, then 'open up' the braces and
put statements inside them afterwards:

if ( condition ) {
} else {
}


[stuff about file enumeration algorithms elided]

> If I'm remembering correctly the other algorithm was to iterate until
> hitting a "file not found" condition. Pete proposed to avoid that by
> getting the file count first. Neither of these directly fixes the
> potential problems from concurrency. Other solutions were proposed,
> but I don't think it's fair to say the problem is "trivially solved,"
> nor is it the case that the original algorithm (if by this we're
> talking about the same "go till file not found" method) works in all
> cases. It's actually not such an easy problem to solve.

I'll grant that is not such an easy problem to solve. However, Pete
was defending his algorithm, and said that he "couldn't see that any
data could be lost", which is why I took issue with it, because I could
see how data could be lost.

Algorithms such as this get especially important when the processing
on each file takes time. For example, if you were enumerating all the
files in a directory and doing some kind of complicated image rendering
on them, potentially taking hours for each file, it is easy to see how
files could be added and deleted from the directory during the iteration
of the loop. To blithely say "I can't see how any data could be lost"
misses that point.

The best solution is probably to check the date stamp on the directory
itself each time around the loop, and to be willing either to issue an
alert and/or make another pass through the directory if you discover
that its contents have changed during the execution of the loop.
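A rough sketch of that loop shape, with GetFolderModDate(), CountFilesInFolder()
and ProcessNthFile() as hypothetical stand-ins for whatever your environment
provides (on the Mac, PBGetCatInfo can supply both the directory dates and the
per-file info); none of these names come from anyone's post:

    extern long GetFolderModDate ( long folderID );
    extern int  CountFilesInFolder ( long folderID );
    extern void ProcessNthFile ( long folderID, int index );

    void ProcessFolder ( long folderID )
    {
        long before;
        int  i, count, changed;

        do {
            changed = 0;
            before  = GetFolderModDate ( folderID );
            count   = CountFilesInFolder ( folderID );

            for ( i = 1; i <= count; i++ ) {
                ProcessNthFile ( folderID, i );        /* may take hours per file */

                if ( GetFolderModDate ( folderID ) != before ) {
                    changed = 1;                       /* contents changed under us;   */
                    break;                             /* alert the user and/or rescan */
                }
            }
        } while ( changed );
    }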

Pete Gontier

unread,
Nov 22, 1993, 2:10:42 PM11/22/93
to
gl...@rightbrain.com (Glenn Reid) writes:

>Well, indentation isn't part of the language. White space separates
>tokens. That's all. You know that, of course, but you offer it as
>a justification for laziness, sloppiness, or whatever else.

>Curly braces compile down into nothingness. You get the same compiled
>code either way. You get more maintainable code and code that's easier
>to read if you use them consistently. Code with too many parentheses
>also compiles down into the same bits, so it's exactly the same argument,
>except that it's a little easier to deal with lack of {} than lack of ().

This is like winning a murder defense on a technicality. Sure, what
you say is true, but that doesn't make it useful. Of course I know
the compiler doesn't care about white space. That's no reason to
prevent me from using it as a tool. Should I use it as a tool to the
exclusion of things the compiler does recognize, like curly braces?
Dunno. I do know that using my own brain to manipulate white space
and other code typography works better than relying on curly braces.
And as a side effect, I get more code to fit on a page. Woo woo.
What I resent is when someone who chooses not to use their own brain
and instead relies on the legalism of putting curly braces everywhere wants
*me* to do the same thing. (Hey, you're at rightbrain.com; you ought
to sympathize... :-)

Parens address a different problem, really. Expressions, for me, are
harder to read than groups of expressions. Probably we are trained to
look at expressions pretty much like we look at "natural" language,
which in the West tends to flow from left to right, with as much text
on one line as possible. So we tend to batch operations into
one-line expressions. I don't think that's liable to change any time
soon. This makes expressions more dense and complex than groups of
expressions. So I don't think parens and curly braces can be argued
about in parallel here, but I haven't quite yet completely formulated
the rationale to go with my intuition.

>algorithm should be what you think about first, and the algorithm should
>inherently allow for "error conditions" that are time-related and not
>simply bad results returned from function calls. In the example you
>gave about enumerating files in a directory, you simply don't allow for
>a very serious error possibility: an error in timing that could occur
>in a highly networked environment. It's also an error that has a trivial
>workaround, as was pointed out.

Trivial in terms of algorithmic "correctness". But as I posted
elsewhere (in fact, in response to a quote of your post -- which I
saw first, sorry), should a word processor take twice as long to
bring up a dialog so that it can "correctly" produce a list of files
that it is going to have to be constantly updating anyway? I mean,
"correctness" is a more complex issue than code. "Correctness" also
has to do with user experience -- what's the use of compiling a
"correct" list of files if you don't also update it in real time? And
if you're going to update it in real time, why take a x2 performance
hit each time through?

Now, if you're talking about groupware, like Jon is writing, I can
see the need for this "trivial" workaround (which I of course don't
see as trivial, but that's the entirely reasonable price Jon pays for
writing groupware).

Pete Gontier

unread,
Nov 22, 1993, 2:25:48 PM11/22/93
to
gl...@rightbrain.com (Glenn Reid) writes:

>I'll grant that is not such an easy problem to solve. However, Pete
>was defending his algorithm, and said that he "couldn't see that any
>data could be lost", which is why I took issue with it, because I could
>see how data could be lost.

Data could fail to be read, true. But I don't think data could be
lost. I don't think allowing users to rely on adding a file to a
directory in the middle of a scan is good practice. They aren't going
to understand the concurrency issues as well as programmers.
Besides, what are you going to do when they do add a file? Say you're
scanning a directory with B, C, and D in it. While you're reading D,
somebody drops A into the folder. When you're finished reading D, do
you go check to see if your scan is still valid? What's this notion
of validity in this context? Do you try to add A to your scan
somehow? Do you put up an error dialog?

My point is that this problem is vastly more complicated than error
checking individual system calls, which is what my original post was
about. If you are a word processor and you need a simple scan of a
directory (and yes, this example was chosen to constrain out
concurrency issues), I maintain it's still much better to get a file
count and loop against it than to behave as if you did whenever you
encounter fnfErr.
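To make the distinction concrete, here is a minimal sketch with
CountFilesInFolder() and GetNthFileInfo() as hypothetical stand-ins for the
real File Manager calls: the loop bound comes from the count, and any
error -- fnfErr included -- is reported as an error instead of being quietly
treated as "no more files":

    typedef struct { char name [ 32 ]; /* ...whatever else you need... */ } FileInfo;

    extern int  CountFilesInFolder ( long folderID );
    extern int  GetNthFileInfo ( long folderID, int index, FileInfo *info );
    extern void UseFileInfo ( const FileInfo *info );

    int ScanFolder ( long folderID )
    {
        FileInfo info;
        int      i, err;
        int      count = CountFilesInFolder ( folderID );

        for ( i = 1; i <= count; i++ ) {
            err = GetNthFileInfo ( folderID, i, &info );
            if ( err )
                return err;        /* an error is an error, not "end of data" */
            UseFileInfo ( &info );
        }
        return 0;                  /* done because the count said so */
    }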

Anyway, the real point was about being careful not to confuse errors
with data. If you want to really constrain this topic for a relevant
discussion, assume the scan always happens on a local volume. Or come
up with a RAM-based example.

>The best solution is probably to check the date stamp on the directory
>itself each time around the loop, and to be willing either to issue an
>alert and/or make another pass through the directory if you discover
>that its contents have changed during the execution of the loop.

This is not a bad idea, assuming you can rely on the directory's
timestamp changing. I'm not too sure about this, having never
attempted to employ this strategy.

Kevin Tsuji

unread,
Nov 22, 1993, 3:07:51 PM11/22/93
to

Thanks for all your responses regarding my previous post. I was wondering
if there was a programming shell that will generate C code specifically for
the different types of Macintosh applications, i.e. desk accessories,
extensions, print drivers, etc. One that can even query a programmer about
the specific input/output of his/her program and then generate code in Symantec

Erik Svensson FOA2

unread,
Nov 16, 1993, 8:13:34 AM11/16/93
to
d88...@dront.nada.kth.se (Jon Wätte) writes:

>In <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

>>Hey, bad programmers write bad code. The language is irrelevant.
>>Fire them if they do it too often.

>That's not always the good thing to do. The German Nazi army always
>shot the operative in charge of failed operations, and, as a result,
>depleted their own human capital and lost many skilled and valuable
>leaders.

Not that it has anything to do with programming, but this is false.
The Russians had a tendency to do this, but not the Germans.

cheers

--
Erik Svensson
Research Officer National Defense Research Establishment (FOA)
Guided Weapons Division Stockholm, Sweden er...@sto.foa.se

"The problem with the future is that it keeps turning into the present."
-- Hobbes

Dana S Emery

unread,
Nov 24, 1993, 3:46:34 PM11/24/93
to
In article <13...@rtbrain.rightbrain.com>, gl...@rightbrain.com (Glenn Reid)
wrote:
>
[general discussion of how best to ensure all files in folder get
processed]

> Algorithms such as this get especially important when the processing
> on each file takes time. For example, if you were enumerating all the
> files in a directory and doing some kind of complicated image rendering
> on them, potentially taking hours for each file, it is easy to see how
> files could be added and deleted from the directory during the iteration
> of the loop. To blithely say "I can't see how any data could be lost"
> misses that point.
>
> The best solution is probably to check the date stamp on the directory
> itself each time around the loop, and to be willing either to issue an
> alert and/or make another pass through the directory if you discover
> that its contents have changed during the execution of the loop.

The user's expectations should be considered: if the user expects the
folder to have dynamic contents, then yes, a dynamic response is needed.

I can see situations where the user would rather you take a snapshot of
the folder at the time the command was given, and process only such files
as were seen in the snapshot.

For the dynamic case I would propose a different solution: maintain a
list of files already processed, and filter candidates for processing
through that list just prior to starting on a new file. Some files could
evaporate before being seen, but lacking any semaphores from the File
Manager/AppleShare/... to tell you when a new file exists, you can't be
sure of seeing all transient files in any case, so don't worry about it.
Eventually the filtering will leave 0 candidates, so you either
quit or go idle.
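A sketch of that shape, assuming hypothetical CountFilesInFolder(),
GetNthFileName() and ProcessFile() helpers and a simple fixed-size name list
(none of these names come from the post):

    #include <string.h>

    #define MAX_DONE  512

    extern int  CountFilesInFolder ( long folderID );
    extern int  GetNthFileName ( long folderID, int index, char *name );
    extern void ProcessFile ( long folderID, const char *name );

    static char done [ MAX_DONE ] [ 32 ];   /* names already processed */
    static int  doneCount = 0;

    static int AlreadyProcessed ( const char *name )
    {
        int i;
        for ( i = 0; i < doneCount; i++ )
            if ( strcmp ( done [ i ], name ) == 0 )
                return 1;
        return 0;
    }

    /* One pass over the folder; returns how many new files it handled.
       The caller keeps calling this until it returns 0, then quits or idles. */
    int ProcessNewFiles ( long folderID )
    {
        char name [ 32 ];                     /* classic Mac names fit in 31 chars + nul */
        int  i, handled = 0;
        int  count = CountFilesInFolder ( folderID );

        for ( i = 1; i <= count; i++ ) {
            if ( GetNthFileName ( folderID, i, name ) != 0 )
                continue;                     /* vanished mid-scan; skip it */
            if ( AlreadyProcessed ( name ) || doneCount >= MAX_DONE )
                continue;                     /* seen already, or our little table is full */
            ProcessFile ( folderID, name );
            strcpy ( done [ doneCount++ ], name );
            handled++;
        }
        return handled;
    }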
--

Dana S Emery <de...@umail.umd.edu>

eric larson

unread,
Nov 24, 1993, 10:51:05 PM11/24/93
to
# A language that prevents unclear expression is too limited to
# express much.

A rich language certainly provides an opportunity for unclear expression.
However I disagree strongly with the idea that it is good practice to use this
richness to express programs in an 'idiomatic' form. As is true for
expressions in natural languages, the best writing is free of idiom, or the
confusion caused by inappropriate use of vocabulary. The value of a rich
vocabulary is in its ability to express a broad range of ideas clearly, not to
hide common ideas behind a wall of obfuscation.

Fowler's rules of Modern English Usage are also appropriate to programming
style.


--
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
eric larson - Internet: eric....@f620.n2605.z1.fidonet.org

Vincent LADEUIL clt53aa

unread,
Nov 26, 1993, 4:33:02 AM11/26/93
to
In article <gurgleCG...@netcom.com> gur...@netcom.com (Pete Gontier) writes:

   I'm not sure what I don't like about this, but I don't like it.
   Either the program is debugged or it's not. To be a little more
   specific: if code is this defensive everywhere, you start getting
   performance hits. Probably not in this example, but say you're
   dealing with deleting a record in a big database. Do you leave the
   version which checks for the presence of the record before deleting
   it in the production build? How about the version of the high-level
   function which verifies relation integrity before starting the
   process of deleting some set of relations and records, etc.?
   --
   Pete Gontier // EC Technology // gur...@netcom.com

I have been programming defensively since '89.
I use two kinds of assertions: strong and weak.
Strong assertions are used for debugging purposes only; weak assertions
stay in the shipping code.
I use assertions to define the 'frontiers' of my programs.
My programs are designed to work inside the frontiers; outside,
the behaviour is undefined ;-)

The ideas of frontiers and defensive programming are easier to
understand if you define BUGS as your enemies.
The purpose of assertions is to defend your programs
against bugs. Leave a hole in your frontiers and bugs will come in.
Always. (The usual political disclaimer applies -- I'm speaking of programs :-)

Here is one key point about assertions: I (the programmer)
decide whether I will handle or ignore each kind of error.
Often, during coding, I think: "I have to handle this error, but
I have no time just now, so I'll deal with it later" -- and then I leave
a big hole and forget it. STOP. PUT AN ASSERTION. That way I will
notice more quickly when my forgotten hole is invaded.

Here is a second point: each time I think "that's impossible",
I put in an assertion to verify it.

    store an object in an array
    ...                          // something useful which doesn't remove the object
    retrieve the object
    assert_weak(object_found)    // fails if the object disappears into a black hole
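A minimal sketch of one way to split the two levels, assuming an NDEBUG-style
compile switch (assert_weak follows the usage above; assert_strong and the
rest are assumptions):

    #include <stdio.h>
    #include <stdlib.h>

    static void assertion_failed ( const char *what, const char *file, int line )
    {
        fprintf ( stderr, "assertion failed: %s, %s:%d\n", what, file, line );
        abort ();                   /* or Debugger()/DebugStr() on the Mac */
    }

    /* weak assertions stay in the shipping code */
    #define assert_weak(cond) \
        ( (cond) ? (void) 0 : assertion_failed ( #cond, __FILE__, __LINE__ ) )

    /* strong assertions are compiled out of the final build */
    #ifdef NDEBUG
    #define assert_strong(cond)   ((void) 0)
    #else
    #define assert_strong(cond)   assert_weak ( cond )
    #endif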

About performance: I usually see 5 or 10% overhead from assertions,
with one exception -- a language interpreter doing intensive calculations,
where the overhead was 50%.
I consider that a very good price, given how much it cuts the time
spent finding bugs.

Generally my development cycle is:
1 - Think (50%)
2 - Write (30%)
3 - Compile and correct syntax errors or misunderstandings of the programming
    language (C++ is sometimes really amazing 8-() (10%)
4 - Run and debug (10% -- believe it or not, that's my reality)
Here the cycle becomes:
- run
- wait for an assertion to fire inside the debugger (gdb traps the abort; on
  the Mac I call Debugger() when an assertion fails)
- look for the explanation (hey, I generally have all the variables involved
  right under my fingers)
- correct the bug


Now a last point: the biggest problems I have encountered during debugging
were always ones where the point at which the bug shows up is far from the
point where it really is. Putting in assertions has two effects against that:
- my programs stop earlier, as soon as they detect an inconsistency
- they sometimes reveal data corruption (stack smashed, or fandango on core)
  caused by a bug somewhere else in otherwise-correct code.

Assertions, We Want You For Solid Code.

Vincent.
