Alexis ARNAUD e-mail : a-ar...@bat710.univ-lyon1.fr
Batiment 710, UFR Informatique, Universite Claude Bernard Lyon 1
43 boulevard du 11 novembre 1918, 69622 Villeurbanne cedex, France
Like most generalizations, it's wrong. CATCH and THROW can be abused, like
GOTO's in many languages. Many uses of CATCH and THROW in earlier Lisps
were for error handling, which should now be done using the condition and
restart mechanisms. Also, BLOCK and RETURN-FROM may be applicable in many
cases where CATCH and THROW were once used.
The main reason he may be saying they're "dirty" is because they're
dynamic, not lexical like BLOCK/RETURN-FROM. Catch blocks are therefore
like special variable bindings -- an implicit part of an interface that's
not obviously apparent in the calling sequence.
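The distinction can be sketched in a few lines (the function names here are made up for illustration):

```lisp
;; CATCH establishes a *dynamic* exit point: any code running while it
;; is active may THROW to it, even a function defined far away that
;; never mentions the catch lexically -- much like a special binding.
(defun helper ()
  (throw 'found 42))              ; relies on a CATCH up the call stack

(defun dynamic-example ()
  (catch 'found
    (helper)                      ; the throw unwinds back to this catch
    'never-reached))

;; BLOCK/RETURN-FROM is lexical: the exit is visible in the source,
;; and only code written inside the block can take it.
(defun lexical-example ()
  (block scan
    (dolist (x '(1 3 5 6 7))
      (when (evenp x)
        (return-from scan x)))))
```

Calling HELPER outside any matching CATCH signals a control error, which is exactly the "implicit part of an interface" problem described above.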
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
>at my university, a computer scientist says the usage
>of throw/catch in Lisp constitutes "dirty programming".
>Could someone who has been programming in Lisp for a long
>time tell me if she's right? If it is so, why should it
>be considered as "dirty programming"?
>Thanks in advance.
CATCH and THROW have their place, but starting programmers will
probably not have need to use them often. The conditions system now
should replace these operators for exception handling uses, and macros
and operators like UNWIND-PROTECT, WITH-OPEN-FILE, etc. probably use
these operators internally.
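As a sketch of that replacement (all names here are invented for illustration, not from any real library), a lookup whose error path might once have been a raw THROW can instead signal a first-class condition:

```lisp
;; Old style: the error path is a THROW to a dynamically visible tag.
(defun find-user-old (id table)
  (catch 'not-found
    (or (cdr (assoc id table))
        (throw 'not-found :no-such-user))))

;; Condition-system style: the error is a condition object that any
;; caller can handle, decline, or recover from.
(define-condition no-such-user (error)
  ((id :initarg :id :reader missing-id)))

(defun find-user (id table)
  (or (cdr (assoc id table))
      (error 'no-such-user :id id)))

(defun find-user-or-nil (id table)
  (handler-case (find-user id table)
    (no-such-user () nil)))
```

The condition version lets intermediate callers stay out of the error path entirely, where the CATCH version forces every layer to know about the tag.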
I think of CATCH and THROW a bit like GO. They are primitive
operations on which all non-local transfer of control can easily be
built. They can be used to do things like modify the language (create
a new language). For most programming, I think you can stick with the
conditions system, provided your CL implementation supports this (it's
part of the ANSI spec). BTW, Corman Lisp 1.3 (and PowerLisp) do not
support conditions. Corman Lisp 1.4 (available soon) does support them.
I think Lisp's CATCH and THROW were at least partly the inspiration
for the exception handling of C++ and Java, which also use the names
catch and throw, for similar, albeit somewhat more complex, purposes.
Catch and throw based exception handling is the state of the art in
these other languages. As usual, Lisp has left them in the dust with
the far more expressive and better-designed condition architecture.
The basic important elements of non-local control are boiled down to
their fundamentals: Conditions, Restarts, Handlers. Each one becomes a
first-class object. I anticipate that it will take another 10 years or
so for this advance to make it into some mainstream language (unless
of course some Lisp manages to make it to the mainstream before then).
Using throw/catch when you could use block/return-from is probably
suboptimal, but otherwise throw/catch is a perfectly fine way of doing
dynamic transfer of control. The other possibility would be to use the
condition system.
Actually, Smalltalk has had this model for about 20 years with the only
difference being that you might have a slightly more flexible set of restart
options (Smalltalk has return to the throwing context, return to the
catching context, throw a different exception, continue computation within
the context of the handler - although these may be the ones available in the
Lisp system, as well). The terminology is different, with "conditions"
corresponding to subclasses of the class Exception in the Smalltalk world.
And each exception is an object which can hold any sort of data. The
performance of exceptions is quite good in most implementations as well
(generally faster than in C++, although that's damning with quite faint
praise). Of course, whether or not one considers Smalltalk to be a
"mainstream language" is debatable. And you're probably right about the
primitive languages taking at least another 10 years or so to catch up...
> Roger Corman <ro...@xippix.com> wrote in message
> > The basic important elements of non-local control are boiled down to
> > their fundamentals: Conditions, Restarts, Handlers. Each one becomes a
> > first-class object. I anticipate that it will take another 10 years or
> > so for this advance to make it into some mainstream language (unless
> > of course some Lisp manages to make it to the mainstream before then).
> Actually, Smalltalk has had this model for about 20 years with the only
Well, the Lisp machine had these for decades, too.
Unfortunately the C++ crowd looked at the Lisp condition system
and decided *not* to include continuable exceptions (exceptions
in C++ terminate the context - in Lisp the handler gets called and
from there you can continue, if you like - in C++ you can't continue).
They asked some TI hackers and they told them that this was rarely needed.
How stupid. ;-) I use this facility all the time.
Common Lisp has a nice condition system.
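A minimal sketch of what "continue from the handler" means in practice (the restart name USE-ZERO and both functions are invented for the example):

```lisp
;; The handler runs with the stack intact and can resume the
;; computation at a restart, unlike a C++ throw, which unwinds first.
(defun careful-sqrt (x)
  (restart-case
      (if (minusp x)
          (error "Cannot take the square root of ~A" x)
          (sqrt x))
    (use-zero ()                  ; a resumption point a handler may pick
      0)))

(defun sum-of-roots (numbers)
  ;; The handler does not abandon the whole computation; it chooses the
  ;; USE-ZERO restart for the offending element and carries on.
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'use-zero))))
    (reduce #'+ (mapcar #'careful-sqrt numbers))))
```

In C++ terms, the equivalent would require the thrower and catcher to agree on re-invoking the failed step by hand; here the language does it.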
The language MAINSAIL (machine independent SAIL) (yes, derived from
Stanford's 1970-era SAIL) has continuable exceptions also.
Please list (at least in prose) some of the USES you guys
have found for continuable exceptions -- I think there are
some pretty neat things you can do, "obvious" ONCE someone
has pointed them out!
(if you are going to talk about "continuations", please
at least define it somewhat -- I've never quite "gotten" them.)
it continues to puzzle me that people who are willing to accept that in
any field worth studying, there will always be a limit to how much _one_
person can know, for simple logistical reasons: it takes too much time to
learn it all, like much longer than a human lifetime. yet in computer
science, that which one person, typically less than 30 years of age,
doesn't know is somehow bad, unworthy, dirty, etc. I'm inclined to
believe that such people are inherently unable to deal with complexity
beyond their own immediate grasp, and as such should not be dealing with
computer science in the first place, since the whole field is all about
managing complexity far beyond direct human capabilities, despite the
evidence we see from dummies who want to learn "programming" in 21 days.
ask your "computer scientist" whether the use of exceptions is also bad.
while you're at it, ask her if the RETURN statement in C is dirty, too.
and if the problem is that GOTO's are bad, what about WHILE? WHILE is no
more than a dressed-up version of GOTO. THROW and CATCH are similarly a
form of GOTO that are not only dressed up, they have university degrees.
I weigh in on the side of throw/catch being "dirty". I also
weigh in on the presumably more controversial side of
even return-from as being "dirty", not to mention a
go outside of the originating tagbody.
The "dirtiness" arises partly from the lack of reusability of the resulting
code. It also arises because all the intervening unwind-protect code sequences
must be executed "during" the non-local transfer, EVEN if the user doesn't
THINK he wrote any (they are used in macros, and something akin to them must be
used to "unbind" the bindings of special variables (in almost all
implementations that support special variables)).
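The point about intervening cleanups can be demonstrated directly; in this sketch (names invented) both the UNWIND-PROTECT cleanup and the special-variable rebinding are undone "during" the THROW:

```lisp
(defvar *log* '())
(defvar *depth* 0)

(defun inner ()
  (let ((*depth* (1+ *depth*)))   ; special rebinding -> implicit unwind work
    (unwind-protect
        (throw 'out :thrown)
      ;; runs mid-transfer, while *depth* is still rebound to 1
      (push (list :cleanup *depth*) *log*))))

(defun outer ()
  (let ((result (catch 'out (inner))))
    ;; by now the special binding has been undone: *depth* is 0 again
    (push (list :after-catch *depth*) *log*)
    result))
```

The cleanup fires with the rebinding still in effect, and the binding is popped before control reaches the catcher, whether or not the programmer wrote either step explicitly.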
A "clean" piece of code in any language should, ideally, never, EVER, for ANY
arguments whatsoever, do a non-local transfer of control,
of any kind. REASON: suppose you want to use that piece of code
to compute the "answer vector" to a suitable vector of "argument lists".
Any non-local transfer of control will mean that, unless "all" of the vector
elements properly "work", even if the "answer" computed is "you gave me bad
arguments", then not all of the answers will be computed: you can't "split" the
control and "throw" for "some" elements, and not for others, UNLESS some
explicit futzing is done by every such user of the code. Even if the language
supports restarts, it is harder to use the original function.
Also, a "clean" piece of code should NEVER, EVER, either "run out of stack
space" (somebody neglected to put a check in for available stack space, at the
minimum) or run "forever" (somebody forgot to
put in a "max compute time allowed" piece of code), or run out of
memory (either ram or disk) and "crash" or "throw". Further, a
clean piece of code should NEVER, EVER get a divide by zero, or
floating point overflow "exception" or any similar situation.
The advantage to only writing "clean" pieces of code is also that it is often
easy to show a function is "clean" if all the functions it invokes are clean.
Done properly, the entire application then tells the user either the "right"
answer(s), or "because of ... the answer could not be computed." The second
answer is far better than a meaningless answer that the user is misled to
believe is right, or a compute loop, or a core dump, or other nasty behavior.
Just as "clean" code is far easier to build when building on "clean" parts,
using "dirty" code leads to more "dirty" code to try to fix up what wasn't done
right in the first place. Unfortunately, some popular languages, such as c and
c++, are full of "dirty" primitives, and it is hard to write "clean" code in
such languages. Lisp has far fewer "dirty" language elements, and even has a
substantial number of language elements that help hide low-level "dirtiness" so
the larger piece is "clean".
It is essentially impossible, in almost all popular languages, to program
without using any assignment statements. It is not even hard, in any lisp that
supports tail-recursion, to write LARGE pieces of complicated code that do no
assignments (setq, setf, push, incf, etc) or destructive operations (nreverse,
rplaca, etc), but confine themselves to iteration via tail-recursion or
mapping, and data "modification" via binding.
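A small sketch of that style: iteration via tail recursion in LABELS, with new values introduced only by binding, never by assignment (the function is invented for illustration):

```lisp
;; Compute running totals of a list with no SETQ/SETF and no
;; destructive operations: each "change" is a fresh binding passed
;; to the tail call.
(defun running-totals (numbers)
  (labels ((walk (remaining total acc)
             (if (null remaining)
                 (reverse acc)
                 (let ((new-total (+ total (first remaining))))
                   (walk (rest remaining)
                         new-total
                         (cons new-total acc))))))
    (walk numbers 0 '())))
```

Every intermediate state is a set of argument bindings, so the compiler is free to keep them in registers however it likes.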
Of course, inside the implementation, assignments will be happening,
but THOSE are not the ones (typically) that cause the bugs in applications. As
an "extremist", I consider most assignments to be
"dirty", even if they are "only" to local variables.
Again, the reason is the same as for throws, gotos, etc. Although the problem
is in some ways less severe, such assignments cause the compiler or other code
transformers to be very limited in the range of what they can do. It's too
hard, typically, in the presence of assignments, to be sure that a code
transform has done the "right" things. So the compiler generates inferior code.
In fact, the whole NOTION of "variables" is a bad idea: it constitutes a
"premature optimization" decision, made by the programmer, that MAY, or MAY
NOT, be a good idea. And it's hard for even good compilers to undo the
optimization, and be sure they haven't damaged the semantics of the program.
Instead, consider the language of mathematics, where infinite series are
"normal", and a function gets from "earlier" elements in one or more
series to later ones. Such a formulation is just as powerful and expressive.
But, it allows the compiler to decide how to "fold" the series elements into
registers, or memory, or actual vectors. Further, a change in the program can
then make the compiler revise its earlier decisions, even though almost all the
source code is identical.
Said again: in all of mathematics, except for those branches that study the
semantics of computer programs, there is no "assignment" notion. And it's done
that way because it is easier for people to understand "timeless" (although
infinite) series, than assignment.
This may all sound "esoteric" and "theoretical" -- but, I assure you, it is
not. I have written many large programs, and debugged many written by others,
and have learned to hate the assignment statement, the uninitialized vector,
the uninitialized variable, the non-local transfer of control, the "dirty"
primitives, and the "dirty" subsystems. Clean code is possible, and it also is
far less buggy, and far easier to add new features, or explain to someone else
how it works.
Compilers (I've written my share) have to go to extraordinary lengths to
overcome the problems introduced by assignments and pointers. You will find, in
fact, that a good lisp compiler will normally, for the same application,
produce code that runs faster than c++ or c, because, primarily, both c and c++
pass many "pointers" around, and the compiler cannot normally figure out whether
a value is altered by a call, or not, so it presumes it MIGHT have been, and
"reloads" the value. Although lisp is "ALL pointers", in fact, they are not
nearly so odious to a lisp compiler as the pointers in c and c++ are to them,
with the result that the usual application runs faster in lisp than in c or c++.
Actually, lisp also wins over the others because there is more time for the
programmer to experiment with different data structures, since he spends less
time chasing screwball memory corruption problems, and other such time wasters.
Finally, all that said, there ARE good uses for throw/catch, etc. Things that
are of no value disappear from lisp, as lisp is still a growing, and changing
language, and will probably still be getting better a hundred years from now.
By then, the lisp library of "standard" functions, macros, types, methods,
constants, classes, combination methods, etc. will probably be in the tens of
thousands, and it may no longer be possible for any human programmer to ever
learn the entire language. But, if there are ever programs that put human
programmers out of a job, it's a sure bet that those programs will be written in
a "lisp" language. (That is likely to incite flames from some quarters, so I'd
better add the usual IMO).
Rather than flame me with "your favorite language is better than lisp", give me
the name of the language and the url where I can get the syntax and semantics -
I'm very interested, if it's one I don't know.
I'm not a "computer scientist". Just a programmer with a mathematics degree,
as "computer science" didn't exist when I went to college. My first paid job as
a programmer was 1965, but I had designed and built computing devices several
years before that.
But I have some firm beliefs:
To be a good programmer, it is NECESSARY to:
1. read most of the CACM issues from 1950 through 1970.
2. read all of Donald Knuth's "The Art of Computer Programming".
3. learn lisp (preferably both common lisp and scheme),
and learn it well. (at least clos or flavors)
4. read everything by Edsger Dijkstra.
5. learn algol (60 and 68), cobol, fortran, pl/1, c, c++, snobol,
smalltalk, prolog, and at least 3 different assembly languages.
Also, learn at least 6 other languages not in the above list.
6. learn denotational semantics.
7. At least once:
A. write an operating system
B. write a macro assembler
C. write a compiler
D. write a tight real-time program (say less than 100
machine instructions, average, available per interrupt,
with the interrupt coming on "irregular" basis)
E. write a simulation of some complex system
F. write a database engine, with a query language, and
with transactions that either commit or roll back when the
power cord is removed while the database is in the middle
of one or more transactions.
G. write at least one set of numerical routines to compute,
efficiently and correctly, a set of operations that should
include several of the trigonometric functions and similar ones.
H. write at least one "moderately large" program in assembly
language, or microcode, or similar low-level language.
I. design at least one digital logic circuit, which is complex
enough to be a real programmable cpu, BUILD IT, and
use it to do SOME useful task, by writing some application
for the machine.
J. Port a large program from one platform to another (large: not
less than 250,000 lines of code, preferably larger)
K. Formally prove a small program correct, (small: say from 400
to 1000 lines). By correct, it must be proved that: under
all possible inputs (for which there must be a formal spec
which delineates which are "legal" and which are "not"), it must
always identify the illegal inputs with a suitable message, and
all the legal inputs must be correctly processed. You may
assume that the compiler/assembler/linker/microcode are bug
free, and the hardware is working. You must prove that no
operation that is undefined, such as out of bounds subscripting,
or arithmetic overflow, divide by zero, etc. is possible, for any
legal input whatsoever. The proof cannot consist of "hand-waving",
but must consist of a line-by-line analysis of the code,
proving that the operations in that line of code, given the things
proved to be true anytime that line is executed, will not cause
any "undefined" or "illegal behavior", and that furthermore, the
program will in fact halt, without any stack overflow, in a reasonable
time, for every possible input, having, as a whole, done precisely
what the formal specification of its action says it should.
It will normally require altering the program to remove the bugs
to allow the proof to be completed. That's ok, and is part of the point of
the exercise.
L. Design, and implement, for a real task, a complete language
that makes the total source code for the task smaller, when
both the language implementation code and the application
proper code lines are added together, than any known way to
do the same application without the language.
M. Improve the performance of some large program, written by
several people, by at least an order of magnitude, preferably
by more than two orders of magnitude.
N. Learn to "code inspect" other people's code. You should get
good enough at it that, for code which is "new" but has compiled
and run a few test cases, you can find not less than one bug
per 300 lines (6 pages of 50 lines per page) (not counting
blank or comment lines as code lines) just by reading the code.
If you cannot do that, then you may have picked code from the
wrong author, but it is more likely that you need more practice
doing code inspections. For relatively junior programmers, I
typically find about 1 bug every 10 lines of code. For almost
all senior programmers, I can still find a bug per hundred lines
of code. You need to practice this skill, as it is harder to do on
your own code than on other people's code. With practice, you
will get better. And, you will learn a set of "common mistakes"
to avoid. Ideally, if circumstances allow, the code inspection
should be done in a "group", so you can hear what the other
folks in the group saw, that you missed. But the group must be
gentle with the ego of the author: the idea is that we all want to
ship code that is not buggy to the customers, and this method
WORKS, and quickly raises the code quality of the entire group,
as anyone going into a code inspection, before he passes out
the code for review, does a personal review and typically finds
considerable bugs. Alternatively, the code author may be
allowed to "speak first" on any given section of code, saying
what bugs he has found since the code was distributed for
inspection (typically, a few days before the meeting).
An industrious programmer should be able to do many of these things before
getting his cs degree. And, once he hits a real job, if he is diligent, it
should not take longer than 10 years experience to reach this level. Some
rare folks pull it all off in college. But those who try to avoid the work (like
skipping those early CACM issues; what value or relevance could they
possibly have?) are never going to become really good programmers - they
will drift off into sales, or management, or some other career entirely.
Once an aspiring programmer has done all (or almost all) of the above, I
consider the programmer no longer a beginner, but already well along the
journey to expert programmer, and then on to (master, guru, wizard - pick your
favorite label - they do the "impossible" "routinely" and "on a schedule" ),
and then to teacher. Beyond teacher, I don't know what comes, as I have only
recently become a "teacher", and am by no means anywhere near even an "expert
teacher", let alone a "teacher of teachers."
I characterize a "fully developed expert programmer" as able to routinely name
several different "standard approaches" to the problem at hand, and to be so
familiar with the literature that he will normally be able to say, when given a
very hard problem, in a rather short time (a couple of days, tops) that he
cannot solve the problem, and that furthermore, no one can, because any solution
to that problem would solve an NP-complete problem, or the halting problem,
or other known or presumed impossible problems (no one has yet stepped
forward with a solution to any NP-complete problem, and a solution to any one
would solve them all). Also, expert programmers will routinely consider
issues other than just the program itself: such as who will be maintaining this
program, how soon do we need it, what's the corporate exposure if it
malfunctions, and many other issues that are not, per se, relevant to the
picking of the "best" program architecture, but VITAL in providing the program
that the company needs. Experts will, when it is appropriate, create a
"quick-and-dirty" program. They will also steadfastly refuse to be pushed into
doing so by a manager who is not seeing the consequences and costs to the
company, when appropriate to do so. Experts will also routinely combine a few
"standard" ideas from areas that have traditionally had little to do the the
problem an hand, and use a blend of those techniques to do something that is
"beyond the state of the art". You have no real idea what approach an expert
might choose. You may rest assured that he will have considered over a dozen,
and perhaps over three dozen different approaches before making the selection.
Occasionally, the expert will say that
the task is "impossible", and be able to point to the cs literature to prove it.
I characterize the (master/guru/wizard) programmer as "he does what the expert
proved was impossible, anyway". At first, only sporadically, and much to his
own surprise. Eventually, he realizes he is doing this humanly-impossible job rather
often, and then starts to figure out how he is doing it, and then, when
fully developed, can say with confidence: he will be able to find the
crux of the problem (not necessarily implement the entire fix),
regardless of the problem, regardless of who has spent how much time
trying to solve it (as long as none of the people involved were of
(master/guru/wizard) rank or above), in a fixed, short time period.
Further, he then knows that "all" problems he has encountered have
been cracked by a single standard method, although he is, particularly at
first, not too sure if it really is "all" problems.
My time period was a week. I never missed the deadline, ever.
I have NOT solved, yet, any NP complete problems. But I've always found a way
"around" that was "good enough" for the business situation. This has very
mixed results on the interpersonal
relationships among the coworkers. Some, who are not yet wizards,
are jealous and angry, and want nothing better than to pretend it's all
a series of flukes. Others, want advice, and actively seek it out. Very
few both believe that it is learnable, and that they can learn it, and the
wizard would be glad to work with them to show them how.
After a year or so of that (ie every couple of weeks doing something
"impossible", in a week, without fail), it becomes clear that somehow,
the techniques have to be taught to the "experts", and the more junior
programmers have to be guided upwards toward experts. It's also
boring, once the repeated surprise of "it worked again" starts to be
"well, if it doesn't work this time, I'll certainly be surprised, and learn, if
I can WHY this time was different, and the standard method didn't work, or took
longer than the standard time before it worked."
Those programmers who have done all, or at least most of, the list of NECESSARY
items above, are prepared to be taught the finer points of language
architecture, or the pros and cons of different languages. The person who
hasn't done the above list is simply too inexperienced to be able to
understand them.
Those who read this who believe they are "interested" and "ready" or "nearly
ready" to move their skills up, and who aren't afraid of work to get there, I'd
be happy to get private replies. If there aren't too many, I'll try to assess
where you are (based on what you tell me) and what would be a "good thing" for
you to do next. If there are "too many", I'll have to think of some other way
to solve the problem. But despite my not very high skill level as a teacher,
I should say I've done some things that the list as a whole need not know, and
probably would not believe.
The Lisp Machine makes extensive use of them. For instance, the OPEN
function has a restart that retries opening with a new filename, and the
system has a standard handler for FILE-NOT-FOUND that prompts for a new
filename and invokes this restart. There's a restart for UNBOUND-VARIABLE
that assigns the variable and retries. And aborting back to a
read-eval-print loop is done using the condition restart mechanism.
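The pattern behind those Lisp Machine restarts can be sketched in portable Common Lisp (this is not the Lisp Machine code; the functions and the table-lookup setting are invented for illustration):

```lisp
;; The low-level function offers a restart; an outer handler decides
;; how to recover without abandoning the operation in progress.
(defun lookup (key table)
  (restart-case
      (or (cdr (assoc key table))
          (error "~S not found" key))
    (use-value (v)            ; analogous to retrying OPEN with a new filename
      v)))

(defun lookup-with-default (key table default)
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'use-value default))))
    (lookup key table)))
```

As with the FILE-NOT-FOUND handler described above, the policy (what value to substitute) lives far from the mechanism (the point of failure).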
that's one point we can agree on
> I weigh in on the side of thow/catch being "dirty". I also
> weigh in on the presumably more controversial side of
> even return-from as being "dirty", not to mention a
> go outside of the originating tagbody.
i would prefer not to be too dogmatic about this. a lot depends on the
compiler you are using. one thing i like about throw / catch (or, even
better, a good exception system) is that it keeps the code clearer: you
concentrate on the logic of the code you write, check for exceptional
conditions and deal with them where appropriate, rather than disfigure
it with checking for them and passing them on at every level in the call
chain. at least with throw / catch you have a language construct that
lets the compiler know that there are irregularities in the control
flow, information that can be exploited in the data flow analysis
> The "dirtiness" arises partly from the lack of reusability of the resulting
why is it not reusable? those are well defined constructs, with the
language standard specifying pretty clearly how it has to be handled
> code. It also arises because all the intervening unwind-protect code sequences
> must be executed "during" the non-local transfer, EVEN if the user doesn't
> THINK he wrote any (they are used in macros, and something akin to them must be
> used to "unbind" the bindings of special variables (in almost all
> implementations that support special variables)).
so? i would rather have the language look after this than have to do it myself
> A "clean" piece of code in any language should, ideally, never, EVER, for ANY
> arguments whatsoever, do a non-local transfer of control,
> of any kind. REASON: suppose you want to use that piece of code
> to compute the "answer vector" to a suitable vector of "argument lists".
> Any non-local transfer of control will mean that, unless "all" of the vector
> elements properly "work", even if the "answer" computed is "you gave me bad
> arguments", then not all of the answers will be computed: you can't "split" the
> control and "throw" for "some" elements, and not for others, UNLESS some
> explicit futzing is done by every such user of the code. Even if the language
> supports restarts, it is harder to use the original function.
beautiful philosophy, but what are you going to do when deep deep deep
in the control structure you encounter a situation that simply can't be
handled any more? passing an error code back, checking it out at the
caller level, only to pass it on to the next level, does nothing but
obfuscate the code. i would even argue that throw / catch and
exceptions improve the reusability of your code: you can write all your
utility routines under the assumption that they can proceed with the
task they are supposed to do. any exceptional situation is handled by
the programmer supplied exception handlers. when you write a program
that has to deal with those situations, you better know where you have
to insert code to handle exceptions
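a sketch of that layering (the parsing task is invented for the example): the middle layer carries no error plumbing at all, yet bad input still reaches the one place that cares.

```lisp
;; Bottom layer: signals when it cannot proceed.
(defun parse-entry (string)
  (or (parse-integer string :junk-allowed t)
      (error "Bad entry: ~S" string)))

;; Middle layer: written as if every entry parses; no error checks,
;; no error codes threaded through.
(defun sum-entries (strings)
  (reduce #'+ (mapcar #'parse-entry strings)))

;; Top layer: the one place that knows what failure should mean here.
(defun safe-sum (strings)
  (handler-case (sum-entries strings)
    (error () :invalid-input)))
```

the error-code alternative would make SUM-ENTRIES check and forward a status on every call, which is exactly the obfuscation described above.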
It is better to fill your days with life than your life with days
If the French "computer science" teachers have not changed since the last
time I saw them (13 years ago), I can tell you that indeed a lot of them
will find all those words dirty. In fact they consider that the whole
concept of programming to produce something useful is a rather disgusting
activity.
Ok they may have changed...
I'd like an example of a GO outside of the "originating TAGBODY" that you
consider dirty. (hint: maybe you don't know the language very well...)
| The "dirtiness" arises partly from the lack of reusability of the
| resulting code.
reusable code is best defined as _functions_ that you can call from other
bodies or code than the body of code it was originally designed to serve.
reusable code is worst defined as code that can be copied (i.e., cut and
paste) from one body of code to another -- yet that is what most people
consider it to be, mostly because they don't have sufficiently powerful
function concepts and engage in programming-by-cut-and-paste themselves.
"reusable code" is nothing more than a new name for "write library code
whenever you can" which is really nothing more than a way of expressing
the age-old concept of "abstraction". of course, when you change focus
or angle, but keep staring at the same old thing, you need a new name so
the people who got disappointed the first few times won't notice the very
same thing you've all been staring at.
| A "clean" piece of code in any language should, ideally, never, EVER, for
| ANY arguments whatsoever, do a non-local transfer of control, of any kind.
and _I_ think this argument is nothing more than the extremely naïve
argument used when people want 100% security or want to abolish accidents
completely, and then they go hunting for someone to sue whenever a
security violation or an accident happens, as if nature itself offended
them by not submitting to their wishful thinking.
the answer to such folly as "if only the world were ideal..." is simply:
"if only the ideals were wordly...".
I'm not afraid to implement what I think will have an ideal _interface_
using whatever dirty tricks are _necessary_ (but none beyond that, of
course). I don't _want_ people to peek inside my function bodies and
reuse the code with cut and paste, nor do I want people to fuck with the
code so there will be hundreds of incompatible versions hacked on by
people who have no regard for the abstract task it was designed to
perform -- I want people to call the function when they can and call me
if they can't.
the desire to abolish accidents is _highly_ irrational. the same goes
for the desire never to see "dirty" code. what it means is that somebody
else should pay an enormous price for the wishful thinking to come true,
which is quite typical of irrational political desires. for some reason,
the United States is the only country on earth where accidents don't
happen -- it's always somebody's fault, and you can sue that somebody for
_neglect_. the same goes for "dirty" code -- if you have to code so
verbosely that you can't finish typing in finite time, that's somehow
better than using a safe mechanism for non-local transfer of control --
and the result is the same as the litigious American society: people lose
the ability to deal with the exceptional that is still within the normal.
this is not to say that certain tasks cannot be written in some "ideal"
way according to some otherworldly school of thought, but the belief that
anybody uses non-local transfer of control _wantonly_ is offensive to anyone
who has used it well.
Since Lisp, CL at least, is accurately characterized as being a ball
of mud and mud is wet dirt, much of lisp is "dirty". What of it?
| I'd like an example of a GO outside of the "originating TAGBODY" that you
| consider dirty. (hint: maybe you don't know the language very well...)
| The "dirtiness" arises partly from the lack of reusability of the
| resulting code.
(defun foo (&aux x y)
  (setq y 1)
  (setq x #'(lambda (z)
              (fiddle-faddle x y z)
              (when (< z 100)
                (setq y (funcall x (* y 2)))))))
I phrased it in a way subject to misinterpretation. Since the
dynamic scope of a tag ends with the dynamic scope of the
tagbody, it is not legal in CL to create closures over recursive
tagbodies and go to tags of tagbodies that have been exited.
(Although I strongly suspect some implementations do not detect
the problem, and some may even support it properly.)
I should have said, something like:
"not to mention a go which is not in the same binding contour as
the target of the go (that is, not inside a closure, let, progv,
let*, or similar construct that must cause the execution of the go
to do something more than a simple transfer of control)."
Of course, that would then widen the discussion beyond the level
of understanding of the original poster.
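A minimal sketch of the situation described above (MAKE-ESCAPER is a hypothetical name, not from the thread): the dynamic extent of a tag ends with its TAGBODY, so a closure that does GO to a tag of an already-exited tagbody has undefined consequences in ANSI CL.

```lisp
;; Calling the closure while the tagbody is active is legal; calling
;; the returned closure afterwards is undefined behavior in ANSI CL.
(defun make-escaper ()
  (let (escape)
    (tagbody
       (setf escape (lambda () (go out)))
       (funcall escape)     ; legal: the tagbody is still active
     out)
    escape))

;; (make-escaper) returns the closure without error, but invoking
;; that closure after MAKE-ESCAPER has returned is undefined.
```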
As you have suggested I do not know CL, I will admit that it is a long
learning experience - I'm on my third copy of Steele's manual, as the
bindings keep falling apart. My first CL introduction was the pre-Mary Poppins
edition (late 1970s or early 1980s). My first Lisp goes back
Note I left fiddle-faddle undefined.
Suppose it were:
(defun fiddle-faddle (op v1 v2)
  (funcall op (+ 3 v1 (foo (+ 17 v2)))))
Thus, foo is called recursively, so there are multiple closures and
tagbodies active. I haven't properly "designed" or debugged, or
tested this code. I'm not sure it terminates, but I THINK it does.
As to "throw/catch" being "needed". That is true, WHEN you
have to glue together stuff in a hurry, and can't change the interfaces
to some of it.
Some poster accused me also of "living in an ivory tower". Having
shipped several database engines, several compilers, and several
operating systems, and miscellaneous other work, I assure you I
can get it out the door, with warts and all, as well as anyone I know.
BUT: it is far better to make the result value have one or more
"exceptional" values (such as NULL in ANSI SQL, or the NaNs of
IEEE floating point arithmetic). The operations all "propagate" the
exceptional VALUE, without any non-local control transfer.
I find it obvious, that since IEEE floating point arithmetic is often
used for vector and matrix operations, on lots of numbers, and since
SQL is often called on to operate on entire sets of rows, that the
support is there, for EXCEPTIONAL VALUES. As I said in the
first post on this issue: non-local transfer of control is dirty, because
it does not "reuse" well.
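The exceptional-value style the poster advocates can be sketched like this (SAFE-DIV and SAFE-ADD are made-up names; the keyword :UNDEFINED plays the role of SQL's NULL or an IEEE NaN):

```lisp
;; Every operation propagates the sentinel value instead of throwing.
(defun safe-div (a b)
  (if (or (eq a :undefined) (eq b :undefined)
          (and (numberp b) (zerop b)))
      :undefined
      (/ a b)))

(defun safe-add (a b)
  (if (or (eq a :undefined) (eq b :undefined))
      :undefined
      (+ a b)))

;; (safe-add 1 (safe-div 1 0)) => :UNDEFINED
;; The failure flows through as a value, with no non-local transfer
;; of control, so the functions compose and map freely.
```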
Said in a more sophisticated way, incomprehensible probably to the
original poster, neither functors nor combinators in general, nor
any function that maps a function over a set of values can easily
use any function that sometimes throws, unless the function is first
"cleaned up" by wrapping a catch, return exceptional value piece
around it. And repetitively "cleaning up" every time I reuse a function
means the function has a bad behavior.
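The "cleaning up" step described above might look like this (RISKY-SQRT and SAFELY are hypothetical names): SAFELY wraps the call in a CATCH so a throwing function can be handed to MAPCAR, converting a THROW into an exceptional value.

```lisp
;; A function that throws on bad input...
(defun risky-sqrt (x)
  (when (minusp x)
    (throw 'bad-input :complex))
  (sqrt x))

;; ...and the wrapper that turns the THROW into a return value.
(defun safely (fn &rest args)
  (catch 'bad-input
    (apply fn args)))

;; (mapcar (lambda (x) (safely #'risky-sqrt x)) '(4 -1 9))
;; => (2.0 :COMPLEX 3.0)
```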
I also NEVER cut and paste - in such circumstances, I use one of
several techniques: virtual machines, macros, or application specific
code generators, or, sometimes, compiler-compilers (I've designed,
and implemented, 3 different ones so far - based on different
formal language techniques).
Finally: I may be "ivory tower", but I've shipped product with zero bugs
reported, for a major computer software vendor, and with many
"enhancement" requests reported. The project shipped on time.
Over 600 productions; multi-threaded, and it just worked. I guess
I got "lucky".
Ah, that's the problem with dynamic bindings.
It would be nice if you followed the usual conventions for quoting. I
had to look at this article for a while and then the one it responded
to before I figured out that you'd simply repeated the message you
responded to unchanged and then added your own text underneath that.
Just a tip.
well, I asked for an example, got it, and now I'm asking for more. why
this "strongly suspect" point? it seems you're creating problems for the
purpose of creating or showing off complexity, not for the opposing, much
more reasonable, purpose of solving them and reducing overall complexity.
(this is a more verbose version of the "ivory tower" accusation.)
| Thus, foo is called recursively, so there are multiple closures and
| tagbodies active.
a GO is explicitly defined to reach the innermost accessible tag, so this
is not a semantic problem. it may be a pragmatic or stylistic problem in
that you can't easily figure out which tag is the innermost, however.
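The "innermost accessible tag" rule can be shown with nested tagbodies that reuse a tag name (INNERMOST-DEMO is a made-up example, not from the thread):

```lisp
;; Both tagbodies use the tag OUT; the GO reaches the inner one.
(defun innermost-demo ()
  (let (result)
    (tagbody
       (tagbody
          (go out)                ; jumps to the inner OUT
          (push :skipped result)  ; never executed
        out
          (push :inner result))
     out
       (push :outer result))
    (nreverse result)))

;; (innermost-demo) => (:INNER :OUTER)
```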
| As to "throw/catch" being "needed". That is true, WHEN you have to glue
| together stuff in a hurry, and can't change the interfaces to some of it.
I think you should consider the possibility that you have overlooked
something when you make sweeping generalizations like this. it is quite
annoying to have to deal with statements that are true but incomplete,
yet false when completed or extended to their natural context. that is,
your assessment of the situation is relevant, yet not the only one that
needs to be considered, and therefore, the conclusion does not hold for
anyone but those who restrict themselves to your particular context.
again, "ivory tower" might apply to such strong yet narrow arguments.
| BUT: it is far better to make the result value have one or more
| "exceptional" values (such as NULL in ANSI SQL, or the NaNs of IEEE
| floating point arithmetic). The operations all "propagate" the
| exceptional VALUE, without any non-local control transfer.
I was disappointed when waiting for the capitalized WHEN to support the
"it is far better" sweeping generalization sans reservations or context.
in some contexts, what you propose is indeed a good idea. few people use
CATCH/THROW or other exception-handling mechanisms in such contexts, for
the very simple reason that the first time they run into a problem, they
will most probably swear and even more probably redesign their code. in
the context of an exception-handling mechanism that is ill-designed, we
do have the option of talking to the people who wrote the code and even
in many cases to do what you consider so gross -- to wrap up the code in
some advice or whatever to protect you from harm, but doing so in cases
where it clearly has no value is an argument against your generalization.
| As I said in the first post on this issue: non-local transfer of control
| is dirty, because it does not "reuse" well.
yes, this is so, in _some_ contexts, but I'm getting increasingly curious
why you exclude all _other_ contexts as inherently irrelevant to any
discussion of this language feature or of exception-handling in general.
| Said in a more sophisticated way, incomprehensible probably to the
| original poster, neither functors nor combinators in general, nor any
| function that maps a function over a set of values can easily use any
| function that sometimes throws, unless the function is first "cleaned up"
| by wrapping a catch, return exceptional value piece around it. And
| repetitively "cleaning up" every time I reuse a function means the
| function has a bad behavior.
this is obviously a bogus general claim. most of the time, we are not
faced with irreversible side effects of our functions, and we are not
therefore in need of a transactional approach to "committing" or not
"committing" whole executions of complex pieces of code. it helps, and
I'll easily grant you that, to know _when_ to require a simple "completed
or not done at all" result from a function, however.
"best effort"-functions that return some "impossible" values may actually
have the annoying consequence that the failure mode is _less_ predictable
than an exception, as _some_ transactions were "committed" after some
failure had occurred, meaning that the failed transactions now have to be
committed out of order, or not at all, which is very different from an
_aggregate_ "commit". I hope you appreciate this distinction.
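the distinction can be sketched as follows (PROCESS-ALL is a hypothetical name): an error aborts the whole batch, giving the aggregate "commit", whereas a best-effort sentinel style would already have committed the earlier items when a later one fails.

```lisp
;; All-or-nothing batch processing: a signalled error unwinds the
;; whole MAPCAR, so nothing partial is ever published.
(defun process-all (items fn)
  (handler-case
      (mapcar fn items)       ; any error aborts the whole batch
    (error () :aborted)))     ; nothing partial escapes

;; (process-all '(1 2 3) #'1+) => (2 3 4)
;; (process-all '(1 -1 3)
;;              (lambda (x) (if (minusp x) (error "bad") (1+ x))))
;; => :ABORTED
```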
I'm not interested in your ad hominem arguments: just because you have
shipped so-and-so-products does not lend any credibility to any of your
arguments -- I'm _not_ interested in who you are or what you have done; I
_am_ interested in whether you can support your sweeping arguments
without reference to such claim to fame or credentials, the inclusion of
which in my view detracts very significantly from effective argumentation.
#:Erik, who's beginning to discover that vacations have serious down-sides :)