
Python from Wise Guy's Viewpoint


mik...@ziplip.com

Oct 19, 2003, 7:18:31 AM
THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad. [a (b c d) e f] would be better
than [a, b(c,d), e, f]

420

P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching

Jarek Zgoda

Oct 19, 2003, 8:51:00 AM
mik...@ziplip.com <mik...@ziplip.com> writes:

> 8. Can you undefine a function, value, class or unimport a module?
> (If the answer is no to any of these questions, Python is simply
> not interactive enough)

Yes, by deleting a name from the namespace. You'd better read some tutorial;
this will save you some time.
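
For example (a minimal sketch -- the names here are purely illustrative):

def foo():
    return 42

print foo()        # 42
del foo            # unbind the name 'foo' in the current namespace
try:
    foo()
except NameError:
    print "foo is gone"   # the binding is gone; 'del' works on any name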

--
Jarek Zgoda
Registered Linux User #-1
http://www.zgoda.biz/ JID:ja...@jabberpl.org http://zgoda.jogger.pl/

Frode Vatvedt Fjeld

Oct 19, 2003, 9:24:18 AM
> mik...@ziplip.com <mik...@ziplip.com> writes:
>
>> 8. Can you undefine a function, value, class or unimport a module?
>> (If the answer is no to any of these questions, Python is simply
>> not interactive enough)

Jarek Zgoda <jzg...@gazeta.usun.pl> writes:

> Yes. By deleting a name from namespace. You better read some
> tutorial, this will save you some time.

Excuse my ignorance wrt. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?

--
Frode Vatvedt Fjeld

Peter Hansen

Oct 19, 2003, 9:19:14 AM

Both are correct, in essence. (And depending on how one interprets
your second point, which is quite ambiguous.)

-Peter

John Thingstad

Oct 19, 2003, 9:41:04 AM
On Sun, 19 Oct 2003 15:24:18 +0200, Frode Vatvedt Fjeld <fro...@cs.uit.no>
wrote:

Neither is completely correct. Functions are internally dealt with using
dictionaries (pythonese for hash-table). The bytecode compiler gives each
one an ID, and the lookup is done using a dictionary. Removing the
function from the dictionary removes the function.
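
Concretely (a small sketch using nothing beyond the standard globals()
dictionary):

def square(x):
    return x * x

print 'square' in globals()    # True: the module namespace is a dict
f = globals()['square']        # fetching by key is the same lookup
print f(5)                     # 25
del globals()['square']        # removing the entry removes the binding
print 'square' in globals()    # False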


--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/

Frode Vatvedt Fjeld

Oct 19, 2003, 9:47:26 AM
Peter Hansen <pe...@engcorp.com> writes:

> Both are correct, in essence. (And depending on how one interprets
> your second point, which is quite ambiguous.)

Frode Vatvedt Fjeld wrote:

>> 1. Function names (strings) are resolved (looked up in the
>> namespace) each time a function is called.

But this implies a rather enormous overhead in calling a function,
doesn't it?

>> 2. You can't really undefine a function such that existing calls to
>> the function will be affected.

What I meant was that if you do the following, in sequence:

a. Define function foo.
b. Define function bar, that calls function foo.
c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1. really is true, I'd expect you'd get an
"undefined name" exception or somesuch.

--
Frode Vatvedt Fjeld

Peter Hansen

Oct 19, 2003, 10:20:11 AM
(I'm replying only because I made the mistake of replying to a
triply-crossposted thread which was, in light of that, obviously
troll-bait. I don't plan to continue the thread except to respond
to Frode's questions. Apologies for c.l.p readers.)

Frode Vatvedt Fjeld wrote:
>
> Peter Hansen <pe...@engcorp.com> writes:
>
> > Both are correct, in essence. (And depending on how one interprets
> > your second point, which is quite ambiguous.)
>
> Frode Vatvedt Fjeld wrote:
>
> >> 1. Function names (strings) are resolved (looked up in the
> >> namespace) each time a function is called.
>
> But this implies a rather enormous overhead in calling a function,
> doesn't it?

"Enormous" is of course relative. Yes, the overhead is more than in,
say C, but I think it's obvious (since people program useful software
using Python) that the overhead is not unacceptably high?

As John Thingstad wrote in his reply, there is a dictionary lookup
involved and dictionaries are extremely fast (yes, yet another relative
term... imagine that!) in Python so that part of the overhead is
relatively unimportant. There is actually other overhead which is
involved (e.g. setting up the stack frame which is, I believe, much larger
than the trivial dictionary lookup).

Note also that if you have a reference to the original function in,
say, a local variable, removing the original doesn't really remove it,
but merely makes it unavailable by the original name. The local variable
can still be used to call it.

> >> 2. You can't really undefine a function such that existing calls to
> >> the function will be affected.
>
> What I meant was that if you do the following, in sequence:
>
> a. Define function foo.
> b. Define function bar, that calls function foo.
> c. Undefine function foo
>
> Now, if you call function bar, will you get a "undefined function"
> exception? But if point 1. really is true, I'd expect you get a
> "undefined name" execption or somesuch.

See below.

Python 2.3.1 (#47, Sep 23 2003, 23:47:32) [MSC v.1200 32 bit (Intel)] on win32
>>> def foo():
...     print 'in foo'
...
>>> def bar():
...     foo()
...
>>> bar()
in foo
>>> del foo
>>> bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in bar
NameError: global name 'foo' is not defined

On the other hand, as I said above, one can keep a reference to the original.
If I'd done "baz = foo" just before the "del foo", then I could easily have
done "baz()" and the original method would still have been called.

Python is dynamic. Almost everything is looked up in dictionaries at
runtime like this. That's its nature, and much of its power (as with
the many other such languages).

-Peter

Frode Vatvedt Fjeld

Oct 19, 2003, 1:38:40 PM
John Thingstad <john.th...@chello.no> writes:

> [..] Functions are internally dealt with using dictionaries
> (pythonese for hash-table). The bytecode compiler gives each one an
> ID, and the lookup is done using a dictionary. Removing the function
> from the dictionary removes the function.

So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?

--
Frode Vatvedt Fjeld

Paul Rubin

Oct 19, 2003, 2:04:20 PM
Frode Vatvedt Fjeld <fro...@cs.uit.no> writes:
> > [..] Functions are internally dealt with using dictionaries
> > (pythonese for hash-table). The bytecode compiler gives each one an
> > ID, and the lookup is done using a dictionary. Removing the function
> > from the dictionary removes the function.
>
> So to get from the ID to the bytecode, you go through a dictionary?
> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

Hah, you wish. If the function name is global, there is a dictionary
lookup, at runtime, on every call.

def square(x):
    return x*x

def sum_of_squares(n):
    sum = 0
    for i in range(n):
        sum += square(i)
    return sum

print sum_of_squares(100)

looks up "square" in the dictionary 100 times. An optimization:

def sum_of_squares(n):
    sum = 0
    sq = square
    for i in range(n):
        sum += sq(i)
    return sum

Here, "sq" is a local copy of "square". It lives in a stack slot in
the function frame, so the dictionary lookup is avoided.
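
If you want to measure the difference yourself, something along these
lines should work (a sketch using the standard timeit module; exact
numbers are machine-dependent):

import timeit

setup = "def square(x): return x*x"

global_loop = """
total = 0
for i in range(1000):
    total += square(i)
"""

local_loop = """
total = 0
sq = square
for i in range(1000):
    total += sq(i)
"""

print timeit.Timer(global_loop, setup).timeit(1000)   # global lookups
print timeit.Timer(local_loop, setup).timeit(1000)    # local alias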

Joachim Durchholz

Oct 19, 2003, 2:01:03 PM
Oh, you're trolling for an inter-language flame fest...
well, anyway:

> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> about them) You now have to suffer from visitor patterns, etc. like
> lowly Java monkeys.

Multimethods suck.

The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations; standard
dispatch strategies as I've seen in some Lisps just cover up the
undefined behaviour, with a slightly less than 50% chance of being correct).

Regards,
Jo

Marcin 'Qrczak' Kowalczyk

Oct 19, 2003, 2:38:13 PM
On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:

> The longer answer: Multimethods have modularity issues (if whatever domain
> they're dispatching on can be extended by independent developers:
> different developers may extend the dispatch domain of a function in
> different directions, and leave undefined combinations;

This doesn't matter until you provide an equally powerful mechanism which
fixes that. Which is it?

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Alex Martelli

Oct 19, 2003, 3:09:04 PM
Frode Vatvedt Fjeld wrote:
...

> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
>
> 1. Function names (strings) are resolved (looked up in the
> namespace) each time a function is called.
>
> 2. You can't really undefine a function such that existing calls to
> the function will be affected.
>
> Is this (i.e. one of these) correct?

Both, depending on how you define "existing call". A "call" that IS
in fact existing, that is, pending on the stack, will NOT in any way
be "affected"; e.g.:

def foo():
    print 'foo, before'
    remove_foo()
    print 'foo, after'

def remove_foo():
    global foo   # without this, 'del foo' would target a (nonexistent) local
    print 'rmf, before'
    del foo
    print 'rmf, after'

the EXISTING call to foo() will NOT be "affected" by the "del foo" that
happens right in the middle of it, since there is no further attempt to
look up the name "foo" in the rest of that call's progress.

But any _further_ lookup is indeed affected, since the name just isn't
bound to the function object any more. Note that other references to
the function object may have been stashed away in many other places (by
other names, in a list, in a dict, ...), so it may still be quite
possible to call that function object -- just not to look up its name
in the scope where it was earlier defined, once it has been undefined.

As for your worries elsewhere expressed that name lookup may impose
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet accurately.
So, on my 30-months-old creaky main box (I keep mentioning its venerable
age in the hope Santa will notice...:-)...:

[alex@lancelot ext]$ timeit.py -c -s'def foo():pass' 'foo'
10000000 loops, best of 3: 0.143 usec per loop
[alex@lancelot ext]$ timeit.py -c -s'def foo():return' 'foo()'
1000000 loops, best of 3: 0.54 usec per loop

So: a name lookup takes about 140 nanoseconds; a name lookup plus a
call of the simplest possible function -- one that just returns at
once -- about 540 nanoseconds. I.e., the call itself plus the
return take about 400 nanoseconds _in the simplest possible case_;
the lookup adds a further 140 nanoseconds, accounting for about 25%
of the overall lookup-call-return pure overhead.

Yes, managing less than 2 million function calls a second, albeit on
an old machine, is NOT good enough for some applications (although,
for many of practical importance, it already is). But the need for speed
is exactly the reason optimizing compilers exist -- for those times
in which you need MANY more millions of function calls per second.
Currently, the best optimizing compiler for Python is Psyco, the
"specializing compiler" by Armin Rigo. Unfortunately, it currently only
only supports Intel-386-and-compatible CPU's -- so I can use it on my
old AMD Athlon, but not, e.g., on my tiny Palmtop, whose little CPU is
an "ARM" (Intel-made these days I believe, but not 386-compatible)
[ for plans by Armin, and many others of us, on how to fix that in the
reasonably near future, see http://codespeak.net/pypy/ ]

Anyway, here's psyco in action on the issue in question:

import time
import psyco

def non_compiled(name):
    def foo(): return
    start = time.clock()
    for x in xrange(10*1000*1000): foo()
    stend = time.clock()
    print '%s %.2f' % (name, stend-start)

compiled = psyco.proxy(non_compiled)

non_compiled('noncomp')
compiled('psycomp')


Running this on the same good old machine produces:

[alex@lancelot ext]$ python2.3 calfoo.py
noncomp 5.93
psycomp 0.13

The NON-compiled 10 million calls took an average of 593 nanoseconds
per call -- roughly the already-measured 540 nanoseconds for the
call itself, plus about 50 nanoseconds for each leg of the loop's
overhead. But, as you can see, Psyco has no trouble optimizing that
by over 45 times -- to about 80 million function calls per second,
which _is_ good enough for many more applications than the original
less-than-2 million function calls per second was.

Psyco entirely respects Python's semantics, but its speed-ups take
particularly good advantage of the "specialized" cases in which the
possibilities for extremely dynamic behavior are not, in fact, being
used in a given function that's on the bottleneck of your application
(Psyco can also automatically use a profiler to find out about that
bottleneck, if you want -- here, I used the finer-grained approach
of having it compile ["build a compiled proxy for"] just one function
in order to be able to show the speed-ups it was giving).

Oh, BTW, you'll notice I explicitly ran that little test with
python2.3 -- that was to ensure I was using the OLD release of
psyco, 1.0; as my default Python I use the current CVS snapshot,
and on that one I have installed psyco 1.1, which does more
optimizations and in particular _inlines function calls_ under
propitious conditions -- therefore, the fact that running
just "python calfoo.py" would have shown a speed-up of _120_
(rather than just 45) would have been "cheating", a bit, as it's
not measuring any more anything related to name lookup and function
call overhead. That's a common problem with optimizing compilers:
once they get smart enough they may "optimize away" the very
construct whose optimization you were trying to check with a
sufficiently small benchmark. I remember when the whole "SPEC"
suite of benchmarks was made obsolete at a stroke by one advance
in compiler optimization techniques, for example:-).

Anyway, if your main interest is in having your applications run
fast, rather than in studying optimization yields on specific
constructs in various circumstances, be sure to get the current
Psyco, 1.1.1, to go with the current Python, 2.3.2 (the pre-alpha
Python 2.4a0 is recommended only to those who want to help with
Python's development, including testing -- throughout at least 2004
you can count on 2.3.something, NOT 2.4, being the production,
_stable_ version of Python, recommended to all).


Alex

mik...@ziplip.com

Oct 19, 2003, 3:40:25 PM
Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...

I'm stimulating a productive discussion / polemic. That's what they try
to do in classrooms, for example. If you want to engage in a flame-fest,
do not blame anyone but yourself. Also read my post "Cross-posting is good"
regarding this.

>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>> about them) You now have to suffer from visitor patterns, etc. like
>> lowly Java monkeys.
>
> Multimethods suck.
>

> The longer answer: Multimethods have modularity issues (if whatever
> domain they're dispatching on can be extended by independent developers:
> different developers may extend the dispatch domain of a function in
> different directions, and leave undefined combinations; standard
> dispatch strategies as I've seen in some Lisps just cover up the
> undefined behaviour, with a slightly less than 50% chance of being
> correct).

To me this is like saying "uni-methods are bad because they can be
called on objects that do not have them" or "functions are bad because
they can be given an argument they do not expect". If you disagree and
think that multimethods are a BIGGER problem than uni-methods, please
provide a specific example (in CLOS)

Alex Martelli

Oct 19, 2003, 5:20:38 PM
Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
>
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>> about them) You now have to suffer from visitor patterns, etc. like
>> lowly Java monkeys.
>
> Multimethods suck.

Multimethods are wonderful, and we're using them as part of the
implementation of pypy, the Python runtime coded in Python. Sure,
we had to implement them, but that was a drop in the ocean in
comparison to the amount of other code in pypy as it stands, much
less the amount of code we want to add to it in the future. See
http://codespeak.net/ for more about pypy (including all of its
code -- subversion makes it available for download as well as for
online browsing).

So, you're both wrong:-).


Alex

Pascal Costanza

Oct 19, 2003, 5:20:54 PM
Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
>
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>> about them) You now have to suffer from visitor patterns, etc. like
>> lowly Java monkeys.
>
>
> Multimethods suck.

Do they suck more or less than the Visitor pattern?

> The longer answer: Multimethods have modularity issues (if whatever
> domain they're dispatching on can be extended by independent developers:
> different developers may extend the dispatch domain of a function in
> different directions, and leave undefined combinations; standard
> dispatch strategies as I've seen in some Lisps just cover up the
> undefined behaviour, with a slightly less than 50% chance of being
> correct).

So how do you implement an equality operator correctly with only single
dynamic dispatch?


Pascal

Kenny Tilton

Oct 19, 2003, 6:22:50 PM

Joachim Durchholz wrote:

> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
>
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>> about them) You now have to suffer from visitor patterns, etc. like
>> lowly Java monkeys.
>
>
> Multimethods suck.
>
> The longer answer: Multimethods have modularity issues

Lisp consistently errs on the side of more expressive power. The idea of
putting on a strait jacket while coding to protect us from ourselves
just seems batty. Similarly, a recent ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing. An admitted former static
typing bigot, he finished by wondering aloud, "Will we all be coding in
Python ten years from now?"

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Kenny Tilton

Oct 19, 2003, 6:30:48 PM

Kenny Tilton wrote:

>
>
> Joachim Durchholz wrote:
>
>> Oh, you're trolling for an inter-language flame fest...
>> well, anyway:
>>
>>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>> about them) You now have to suffer from visitor patterns, etc. like
>>> lowly Java monkeys.
>>
>>
>>
>> Multimethods suck.
>>
>> The longer answer: Multimethods have modularity issues
>
>
> Lisp consistently errs on the side of more expressive power. The idea of
> putting on a strait jacket while coding to protect us from ourselves
> just seems batty. Similarly, a recent ex-C++ journal editor recently
> wrote that test-driven development now gives him the code QA peace of
> mind he once sought from strong static typing. An admitted former static
> typing bigot, he finished by wondering aloud, "Will we all be coding in
> Python ten years from now?"

http://www.artima.com/weblogs/viewpost.jsp?thread=4639

Tomasz Zielonka

Oct 19, 2003, 6:37:46 PM
Kenny Tilton wrote:
>
> Lisp consistently errs on the side of more expressive power. The idea of
> putting on a strait jacket while coding to protect us from ourselves
> just seems batty. Similarly, a recent ex-C++ journal editor recently
> wrote that test-driven development now gives him the code QA peace of
> mind he once sought from strong static typing.

C++ is not the best example of strong static typing. It is a language
full of traps, which can't be detected by its type system.

> An admitted former static typing bigot, he finished by wondering
> aloud, "Will we all be coding in Python ten years from now?"
>
> kenny

Best regards,
Tom

--
.signature: Too many levels of symbolic links

Scott McIntire

Oct 19, 2003, 6:39:41 PM

"Kenny Tilton" <kti...@nyc.rr.com> wrote in message
news:_8Ekb.7543$pT1...@twister.nyc.rr.com...

>
>
> Joachim Durchholz wrote:
>
> > Oh, you're trolling for an inter-language flame fest...
> > well, anyway:
> >
> >> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> >> about them) You now have to suffer from visitor patterns, etc. like
> >> lowly Java monkeys.
> >
> >
> > Multimethods suck.
> >
> > The longer answer: Multimethods have modularity issues
>
> Lisp consistently errs on the side of more expressive power. The idea of
> putting on a strait jacket while coding to protect us from ourselves
> just seems batty. Similarly, a recent ex-C++ journal editor recently
> wrote that test-driven development now gives him the code QA peace of
> mind he once sought from strong static typing. An admitted former static
> typing bigot, he finished by wondering aloud, "Will we all be coding in
> Python ten years from now?"
>
> kenny
>

There was a nice example from one of the ILC 2003 talks about a European
Space Agency rocket exploding with a valuable payload. My understanding was
that there was testing, but maybe too much emphasis was placed on the static
type checking of the language used to control the rocket. The end result was
a run-time arithmetic overflow which the code interpreted as "rocket off
course". The rocket code instructions in this event were to self destruct.
It seems to me that the Agency would have fared better if they had just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.

I'd be interested in hearing more about this from someone who knows the
details.

-R. Scott McIntire


Alex Martelli

Oct 19, 2003, 6:39:41 PM
Frode Vatvedt Fjeld wrote:

> John Thingstad <john.th...@chello.no> writes:
>
>> [..] Functions are internally dealt with using dictionaries. The

Rather, _names_ are dealt with that way (for globals; it's faster for
locals -- there, the compiler can turn the name into an index
into the table of locals' values), whether they're names of functions
or names of other values (Python doesn't separate those namespaces).

>> bytecode compiler gives each one an ID, and the lookup is done using a
>> dictionary. Removing the function from the dictionary removes the
>> function. (pythonese for hash-table)
>
> So to get from the ID to the bytecode, you go through a dictionary?

No; it's up to the implementation, but in CPython the id is the
memory address of the function object, so the bytecode's directly
accessed from there (well, there's a couple of indirections --
function object to code object to code string -- nothing important).

> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

No, it's a lookup. Dict lookup for globals, fast (index in table)
lookup for locals (making locals much faster to access), but a
lookup anyway. I've already posted about how psyco can optimize
this, being a specializing compiler, when it notices the dynamic
possibilities are not being used in a given case.
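
The difference is visible in the bytecode, by the way; a quick sketch
with the standard dis module:

import dis

def f():
    sq = len     # bind a local name to the same function object
    len([])      # global name: compiled to LOAD_GLOBAL (dict lookup)
    sq([])       # local name: compiled to LOAD_FAST (indexed slot)

dis.dis(f)       # the disassembly shows LOAD_GLOBAL vs. LOAD_FAST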


Alex


Terry Reedy

Oct 19, 2003, 7:27:52 PM

"Scott McIntire" <mcintire_c...@comcast.net> wrote in message
news:MoEkb.821534$YN5.832338@sccrnsc01...

> There was a nice example from one of the ILC 2003 talks about a
> European Space Agency rocket exploding with a valuable payload. My
> understanding was that there was testing, but maybe too much emphasis
> was placed on the static type checking of the language used to control
> the rocket. The end result was a run-time arithmetic overflow which
> the code interpreted as "rocket off course". The rocket code
> instructions in this event were to self destruct. It seems to me that
> the Agency would have fared better if they had just used Lisp - which
> has bignums - and relied more on regression suites and less on the
> belief that static type checking systems would save the day.
>
> I'd be interested in hearing more about this from someone who knows
> the details.

I believe you are referring to the first flight of the Ariane 5
(sp?). The report of the investigating commission is on the web
somewhere and an interesting read. They identified about five
distinct errors. Try google.

Terry


Tom Breton

Oct 19, 2003, 8:39:44 PM
mik...@ziplip.com writes:

> Joachim Durchholz wrote:
>
> > Oh, you're trolling for an inter-language flame fest...
>
> I'm stimulating a productive discussion / polemic. That's what they try
> to do in classrooms, for example. If you want to engage in a flame-fest,
> do not blame anyone but yourself. Also read my post "Cross-posting is good"
> regarding this.

Translation, you're trolling for an inter-language flame fest. Just
like he said.


--
Tom Breton at panix.com, username tehom. http://www.panix.com/~tehom

Pascal Bourguignon

Oct 19, 2003, 10:47:44 PM
Dennis Lee Bieber <wlf...@ix.netcom.com> writes:
> LISP wouldn't have helped -- since the A-4 code was supposed to
> fail with values that large... And would have done the same thing if
> plugged in the A-5. (Or are you proposing that the A-4 code is supposed
> to ignore a performance requirement?)

Or perhaps it would have helped since LISP sources would have included
a little expert system that would have asked itself: "Do I really want
to commit suicide now? Let's see, everything looks ok but this old
code from A4... I guess it's got Alzheimer, I'll ignore it for now".


--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(

Kenny Tilton

Oct 19, 2003, 11:49:57 PM

Dennis Lee Bieber wrote:

> Scott McIntire fed this fish to the penguins on Sunday 19 October 2003
> 15:39 pm:


>> There was a nice example from one of the ILC 2003 talks about a
>> European Space Agency rocket exploding with a valuable payload. My
>> understanding was that there was testing, but maybe too much emphasis
>> was placed on the static type checking of the language used to control
>> the rocket. The end result was a run-time arithmetic overflow which
>> the code interpreted as "rocket off course". The rocket code
>> instructions in this event were to self destruct. It seems to me that
>> the Agency would have fared better if they had just used Lisp - which
>> has bignums - and relied more on regression suites and less on the
>> belief that static type checking systems would save the day.
>>
>> I'd be interested in hearing more about this from someone who knows
>> the details.

> Just check the archives for comp.lang.ada and Ariane-5.
>
> Short version: The software performed correctly, to specification
> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
> DESIGNED.

Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a
data conversion from 64-bit floating point to 16-bit signed integer
value. The floating point number which was converted had a value greater
than what could be represented by a 16-bit signed integer. This resulted
in an Operand Error. The data conversion instructions (in Ada code) were
not protected from causing an Operand Error, although other conversions
of comparable variables in the same place in the code were protected.
The error occurred in a part of the software that only performs
alignment of the strap-down inertial platform. This software module
computes meaningful results only before lift-off. As soon as the
launcher lifts off, this function serves no purpose."


> LISP wouldn't have helped -- since the A-4 code was supposed to
> fail with values that large... And would have done the same thing if
> plugged in the A-5. (Or are you proposing that the A-4 code is supposed
> to ignore a performance requirement?)

"supposed to" fail? chya. This was nothing more than an unhandled
exception crashing the sytem and its identical backup. Other conversions
were protected so they could handle things intelligently, this bad boy
went unguarded. Note also that the code functionality was pre-ignition
only, so there is no way they were thinking that a cool way to abort the
flight would be to leave a program exception unhandled.

What happened (aside from an unnecessary chunk of code running
increasing risk to no good end) is that the extra power of the A5 caused
oscillations greater than those seen in the A4. Those greater
oscillations took the 64-bit float beyond what would fit in the 16-bit
int. kablam. Operand Error. This is not a system saying "whoa, out of
range, abort".

As for Lisp not helping:

> most-positive-fixnum ;; constant provided by implementation
536870911

> (1+ most-positive-fixnum) ;; overflow fixnum type and...
536870912

> (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
BIGNUM

> (round most-positive-single-float) ;; or floor or ceiling
340282346638528859811704183484516925440
0.0

> (type-of *)
BIGNUM
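
(Python, for what it's worth, does the same kind of automatic promotion;
a quick Python 2 sketch:)

import sys

n = sys.maxint     # largest plain int
print type(n)      # <type 'int'>
print type(n + 1)  # <type 'long'> -- promoted, not an overflow error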

Thomas F. Burdick

Oct 20, 2003, 1:46:56 AM (to al...@aleax.it)
Alex Martelli <al...@aleax.it> writes:

> Joachim Durchholz wrote:
>
> > Oh, you're trolling for an inter-language flame fest...
> > well, anyway:
> >
> >> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> >> about them) You now have to suffer from visitor patterns, etc. like
> >> lowly Java monkeys.
> >
> > Multimethods suck.
>
> Multimethods are wonderful, and we're using them as part of the
> implementation of pypy, the Python runtime coded in Python. Sure,
> we had to implement them, but that was a drop in the ocean in
> comparison to the amount of other code in pypy as it stands, much
> less the amount of code we want to add to it in the future.

So do the Python masses get to use multimethods?

(with-lisp-trolling
And have you seen the asymptote yet, or do you need to grow macros first?)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Fergus Henderson

Oct 20, 2003, 3:38:00 AM
Kenny Tilton <kti...@nyc.rr.com> writes:

>Dennis Lee Bieber wrote:
>
>> Just check the archives for comp.lang.ada and Ariane-5.
>>
>> Short version: The software performed correctly, to specification
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>> DESIGNED.
>
>Nonsense.

No, that is exactly right. Like the man said, read the archives for
comp.lang.ada.

>From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
>"The internal SRI software exception was caused during execution of a
>data conversion from 64-bit floating point to 16-bit signed integer
>value. The floating point number which was converted had a value greater
>than what could be represented by a 16-bit signed integer. This resulted
>in an Operand Error. The data conversion instructions (in Ada code) were
>not protected from causing an Operand Error, although other conversions
>of comparable variables in the same place in the code were protected.
>The error occurred in a part of the software that only performs
>alignment of the strap-down inertial platform. This software module
>computes meaningful results only before lift-off. As soon as the
>launcher lifts off, this function serves no purpose."

That's all true, but it is only part of the story, and selectively quoting
just that part is misleading in this context.

For a more detailed answer, see
<http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.

>> LISP wouldn't have helped -- since the A-4 code was supposed to
>> fail with values that large... And would have done the same thing if
>> plugged in the A-5. (Or are you proposing that the A-4 code is supposed
>> to ignore a performance requirement?)
>
>"supposed to" fail? chya. This was nothing more than an unhandled
>exception crashing the sytem and its identical backup. Other conversions
>were protected so they could handle things intelligently, this bad boy
>went unguarded.

The reason that it went unguarded is that the programmers DELIBERATELY
omitted an exception handler for it. The post at the URL quoted above
explains why.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.

Duncan Booth

Oct 20, 2003, 5:09:03 AM
mik...@ziplip.com wrote in
news:LVOAILABAJAFKMCPJ0F1...@ziplip.com:

> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
> 90% of the code is function applications. Why not make it convenient?

What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which are
not currently ambiguous?

--
Duncan Booth dun...@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

Alex Martelli

Oct 20, 2003, 5:40:40 AM
Duncan Booth wrote:

> mik...@ziplip.com wrote in
> news:LVOAILABAJAFKMCPJ0F1...@ziplip.com:
>
>> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
>> 90% of the code is function applications. Why not make it convenient?
>
> What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
> f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which are
> not currently ambiguous?

Haskell has it easy -- f x y z is the same as ((f x) y) z -- as an
N-ary function is "conceptualized" as a unary function that returns
an (N-1)-ary function [as Haskell Curry conceptualized it -- which
is why the language is named Haskell, and the concept currying:-)].
So, your 5th case, f(x)(y)(z), would be exactly the same thing.
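
The same shape can be imitated in Python with closures -- a toy sketch,
not a proposal:

def add3(x):
    def partial_y(y):
        def partial_z(z):
            return x + y + z
        return partial_z
    return partial_y

print add3(1)(2)(3)    # 6 -- "f x y z" in the curried sense, ((f x) y) z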

When you want to apply operators in other than their normal order
of priority, then and only then you must use parentheses, e.g. for
your various cases they would be f (x y z) [1st case], f (x (y z))
[2nd case], f x (y z) [3rd case], f (x y) z [4th case]. You CAN,
if you wish, add redundant parentheses, of course, just like in
Python [where parentheses are overloaded to mean: function call,
class inheritance, function definition, empty tuples, tuples in
list comprehensions, apply operators with specified priority --
I hope I recalled them all;-)].

Of course this will never happen in Python, as it would break all
backwards compatibility. And I doubt it could sensibly happen in
any "simil-Python" without adopting many other Haskell ideas, such
as implicit currying and nonstrictness. What "x = f" should mean
in a language with assignment, everything first-class, and implicit
rather than explicit calling, is quite troublesome too.

Ruby allows some calls without parentheses, but the way it disambiguates
"f x y" between f(x(y)) and f(x, y) is, IMHO, pricey -- it has to KNOW
whether x is a method, and if it is it won't just let you pass it as such
as an argument to f; that's the slippery slope whereby you end up having to
write x.call(y) because not just any object is callable.
"x = f" CALLS f if f is a method, so you can't just treat methods
as first-class citizens like any other... etc, etc...
AND good Ruby texts recommend AVOIDING "f x y" without parentheses,
anyway, because it's ambiguous to a human reader, even when it's
clear to the compiler -- so the benefit you get for that price is
dubious indeed.


Alex

mik...@ziplip.com

Oct 20, 2003, 6:28:25 AM
Duncan Booth wrote:

> mik...@ziplip.com wrote in
> news:LVOAILABAJAFKMCPJ0F1...@ziplip.com:
>
>> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
>> 90% of the code is function applications. Why not make it convenient?
>
> What syntax do you propose to use for

isn't it obvious? What ML, Haskell (and I suppose Dylan) use:

> f(x(y,z)),

f (x y z)

> f(x(y(z))),

f (x (y z))

> f(x,y(z))

f x (y z)

> f(x(y),z)

f (x y) z

> f(x)(y)(z)

(((f x) y) z)

In each of these cases, my version uses as many or fewer syntactic elements
(commas and parens) than yours. The problem with Python is that it also
lacks uniformity here (think "print" and "del").

Now look at some more realistic code:

sin(x) + 2 / atan(z)

sin x + 2 / atan z

the latter is exactly how you would write this in math.

Anyway, I'm not advocating incompatible changes to the language
(that's unrealistic). I'm saying people should be kicked in the nuts when
they try to design their own languages unless they already know a few
*smart* languages (Lisp, Haskell, etc.). Java, C#, Pascal and Basic are not
smart. Also, since many Pythonistas aren't programmers, and may never
learn such smart languages, my aim was to give them a little perspective.

You, Pythonista, are like naive little children who have to be
taught about the world, while Haskellers and Lispers are
experienced old men.

But to be _wise_ like me, it's not enough to grow a beard or
know Lisp or Haskell. You have to know *both*, because they are
so different, you have to grok monads and master macros.

Lisp is strict, Haskell is lazy
Lisp is dynamically typed, Haskell is statically typed
Lisp has OO, Haskell does not
Haskell uses monadic IO, Lisp does not
Haskell has indentation-based syntax, Lisp uses S-expressions.

When you know both Lisp and Haskell (and maybe a few other
languages) then other languages seem like small variations
on some old theme (or just plain bad ideas like C# and X++)

Anyway, this discussion is boring me. I'll leave you to
argue amongst yourselves. My job here is done.

420

P.S. Lulu aka David Mertz, I hate to say this, but you are the
BIGGEST idiot in comp.lang.python I've encountered so far.

P.P.S. You can have O(1) function calls in the interpreter (see Lisp).
Just one other thing that was done unWISEly in Python.

Joachim Durchholz

Oct 20, 2003, 7:06:08 AM
Marcin 'Qrczak' Kowalczyk wrote:

> On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:
>
>>The longer answer: Multimethods have modularity issues (if whatever domain
>>they're dispatching on can be extended by independent developers:
>>different developers may extend the dispatch domain of a function in
>>different directions, and leave undefined combinations;
>
> This doesn't matter until you provide an equally powerful mechanism which
> fixes that. Which is it?

I don't think there is a satisfactory one. It's a fundamental problem:
if two people who don't know of each other can extend the same thing
(framework, base class, whatever) in different directions, who's
responsible for writing the code needed to combine these extensions?

Solutions that I have seen or thought about are:

1. Let the system decide. Technically feasible for base classes (in the
form of prioritisation rules for multimethods), technically infeasible for
frameworks. The problem here is that the system doesn't (usually) have
enough information to reliably make the correct decision.

2. Let the system declare an error if the glue code isn't there.
Effectively prohibits all forms of dynamic code loading. Can create
risks in project management (unexpected error messages during code
integration near a project deadline - yuck). Creates a temptation to
hack the glue code up, by people who don't know the details of the two
modules involved.

3. Disallow extending in multiple directions. In other words, no
multimethods, and live with the asymmetry.
Too restricted to be comfortable with.

4. As (3), but allow multiple extensions if they are contained within
the same module. I.e. allow multiple dispatch within an "arithmetics"
module that defines the classes Integer, Real, Complex, etc. etc., but
don't allow additional multiple dispatch outside the module. (Single
dispatch would, of course, be OK.)

5. As (3), but require manual intervention. IOW let the two authors who
did the orthogonal extensions know about each other, and have each
module refer to the other, and each module carry the glue code required
to combine with the other.
Actually, this is the practice for various open source projects. For
example, authors of MTAs, mail servers etc. cooperate to set standards.
Of course, if the authors aren't interested in cooperating, this doesn't
work well either.

6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
your language offers for that purpose, be it "generics" or "templates").
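
To make the mechanism under discussion concrete, here is a toy
multiple-dispatch table in Python (names purely illustrative; real
multimethod systems are much richer):

methods = {}

def defmethod(op, types, func):
    methods[(op,) + types] = func

def dispatch(op, a, b):
    func = methods.get((op, a.__class__, b.__class__))
    if func is None:
        # the "undefined combination" case discussed above
        raise TypeError("no method for %s on this pair of types" % op)
    return func(a, b)

class Integer: pass
class Real: pass

defmethod('add', (Integer, Integer), lambda a, b: "int+int")
defmethod('add', (Integer, Real), lambda a, b: "int+real")

print dispatch('add', Integer(), Integer())   # "int+int"
# dispatch('add', Real(), Real()) raises TypeError: nobody defined it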

Regards,
Jo

Joachim Durchholz

Oct 20, 2003, 7:13:45 AM
Pascal Costanza wrote:

> Joachim Durchholz wrote:
>
>> Oh, you're trolling for an inter-language flame fest...
>> well, anyway:
>>
>>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>> about them) You now have to suffer from visitor patterns, etc. like
>>> lowly Java monkeys.
>>
>> Multimethods suck.
>
> Do they suck more or less than the Visitor pattern?

Well, the visitor pattern is worse.
Generics would be better though.

> So how do you implement an equality operator correctly with only single
> dynamic dispatch?

Good question.

In practice, you don't use dispatch, you use some built-in mechanism.

Even more in practice, all equality operators that I have seen tended to
compare either more or less than one wanted compared, at least for
complicated types with large hidden internal structures, or with different
but equivalent internal structures. I have seen many cases where people
implemented several equality operators - of course, with different
names - and in most cases, I'm under the impression they weren't even
aware that it was equality that they were implementing :-)

Examples are:
Lisp with its multitude of equality predicates nicely exposes the
problems, and provides a solution.
Various string representations (7-bit Ascii, 8-bit Ascii, various
Unicode flavors). Do you want to compare representations or contents? Do
you need a code table to compare?
Various number representation: do you want to make 1 different from 1.0,
or do you want to have them equal?

I think that dynamic dispatch is an interesting answer, but not to
equality :-)
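
Python itself shows a mild version of the same split between notions of
equality:

a = [1, 2]
b = [1, 2]
print a == b      # True  -- structural equality
print a is b      # False -- identity
print 1 == 1.0    # True  -- equal across numeric representations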

Regards,
Jo

Joachim Durchholz

Oct 20, 2003, 7:22:08 AM
Kenny Tilton wrote:

>
> Dennis Lee Bieber wrote:
>
>> Short version: The software performed correctly, to
>> specification (including the failure mode) -- ON THE ARIANE 4 FOR
>> WHICH IT WAS DESIGNED.
>
> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
> "The internal SRI software exception was caused during execution of a
> data conversion from 64-bit floating point to 16-bit signed integer
> value. The floating point number which was converted had a value greater
> than what could be represented by a 16-bit signed integer. This resulted
> in an Operand Error. The data conversion instructions (in Ada code) were
> not protected from causing an Operand Error, although other conversions
> of comparable variables in the same place in the code were protected.
> The error occurred in a part of the software that only performs
> alignment of the strap-down inertial platform. This software module
> computes meaningful results only before lift-off. As soon as the
> launcher lifts off, this function serves no purpose."

That's the sequence of events that led to the crash.
The reason this sequence could happen even though it shouldn't have is
exactly what Dennis wrote: the conversion caused an exception because
the Ariane-5 had a tilt angle beyond what the SRI was designed for.

> What happened (aside from an unnecessary chunk of code running
> increasing risk to no good end) is that the extra power of the A5 caused
> oscillations greater than those seen in the A4. Those greater
> oscillations took the 64-bit float beyond what would fit in the 16-bit
> int. kablam. Operand Error. This is not a system saying "whoa, out of
> range, abort".
>
> As for Lisp not helping:
>
> > most-positive-fixnum ;; constant provided by implementation
> 536870911
>
> > (1+ most-positive-fixnum) ;; overflow fixnum type and...
> 536870912
>
> > (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
> BIGNUM
>
> > (round most-positive-single-float) ;; or floor or ceiling
> 340282346638528859811704183484516925440
> 0.0
>
> > (type-of *)
> BIGNUM

Lisp might not have helped even in that case.
1. The SRI was designed for an angle that would have fit into a 16-bit
operand. If the exception hadn't been thrown, some hardware might still
have malfunctioned.
2. I'm pretty sure there's a reason (other than saving space) for that
conversion to 16 bits. I suspect it was to be fed into some hardware
register... in which case all bignums of the world aren't going to help.

Ariane 5 is mostly a lesson in management errors. Software methodology
might have helped, but just replacing the programming language would
have been insufficient (as usual - languages can make proper testing
easier or harder, but the trade-off will always be present).

Regards,
Jo

Joachim Durchholz

Oct 20, 2003, 7:32:19 AM
mik...@ziplip.com wrote:

> [...] If you [...] think that multimethods are a BIGGER problem than

> uni-methods, please provide a specific example (in CLOS)

I can't write in CLOS, I'd make all sorts of stupid mistakes since I
never read more than the language specs (and that's several years in the
past).
Anyway, I lost interest in CLOS when I saw those clumsy BEFORE and AFTER
keywords, and that prioritisation machinery for multimethods. Too
complicated, too liberal (allowing lots of powerful things and lots of
subtle bugs).
I might be conflating this with Scheme, though. I looked at both at
about the same time :-)

I'd really like to see a Lisp dialect that valued reliability over raw
expressive power. But I fear this isn't very high on the agenda of the
Lisp community. Besides, it would be difficult to do that - Lisp offers
no protection against peeking at internals and setting up all that
unsafe-but-powerful stuff. In my eyes, Lisp is a valuable
experimentation lab for new language mechanisms, but not fit for
production use.
Let me add a troll-bait disclaimer: Actually I don't see *any* language
that's fit for production use. All languages are just approximations to
that ideal, some are better, some are worse.
In other words: Lisp is too powerful and dangerous, C++ is too tricky, C
is too low-level, Java is too slow (even when compiled) and slightly too
restricted, [add your favourite language and its deficits here] - choose
your evil...

Regards,
Jo

Espen Vestre

Oct 20, 2003, 7:59:19 AM
Joachim Durchholz <joachim....@web.de> writes:

> Anyway, I lost interest in CLOS when I saw those clumsy BEFORE and AFTER
> keywords, and that prioritisation machinery for multimethods. Too
> complicated, too liberal (allowing lots of powerful things and lots of
> subtle bugs).

If you come to such a decision without even trying it out, it doesn't
mean CLOS has a problem, but rather that you have an attitude problem.

Multimethods and before/after/around-methods are among the things
that make me really happy as a lisp programmer, and with them I've
done things to systems - with a few lines of code - that would have
required a complete rewrite with more limited languages.

> unsafe-but-powerful stuff. In my eyes, Lisp is a valuable
> experimentation lab for new language mechanisms, but not fit for
> production use.

Hmm. I wonder why my CLOS-infested server software keeps running for
MONTHS?

> In other words: Lisp is too powerful and dangerous, C++ is too tricky, C
> is too low-level, Java is too slow (even when compiled) and slightly too
> restricted, [add your favourite language and its deficits here] - choose
> your evil...

But of all evils, Common Lisp is the least, since it gives you the
most reliable code (yes it DOES!), gives good programmers the
opportunity to write wonderfully readable code, is wonderfully
expressive and is Great Fun to work with.

--
(espen)

james anderson

Oct 20, 2003, 8:19:24 AM

Joachim Durchholz wrote:
>
> mik...@ziplip.com wrote:
>
> > [...] If you [...] think that multimethods are a BIGGER problem than
> > uni-methods, please provide a specific example (in CLOS)
>
> I can't write in CLOS, I'd make all sorts of stupid mistakes since I
> never read more than the language specs (and that's several years in the
> past).
> Anyway, I lost interest in CLOS when I saw those clumsy BEFORE and AFTER
> keywords, and that prioritisation machinery for multimethods. Too
> complicated, too liberal (allowing lots of powerful things and lots of
> subtle bugs).
> I might be conflating this with Scheme, though. I looked at both at
> about the same time :-)

how about formulating some examples in some language which is adequate to
express them? perhaps somewhat more concretely than the allusions in your
earlier message, in which you suggest some problem domains and some
amorphously difficult decisions, but despite several rereadings, never
concretely indicate what does not "work".

what does "different directions" mean? "glue code"? "asymmetry"? a "base
class"? a "module"? an "orthogonal extension"?

what is the distinction between "dynamic dispatch" and "parametric polymorphism"?

if not in the context of clos, then, well, in english.

>
> I'd really like to see a Lisp dialect that valued reliability over raw
> expressive power. But I fear this isn't very high on the agenda of the
> Lisp community. Besides, it would be difficult to do that - Lisp offers
> no protection against peeking at internals and setting up all that
> unsafe-but-powerful stuff.

what are "internals", what is "protection"?

please at least propose some concrete examples or use cases before making
assertions to which your opening sentence does not lend much authority. it
would be nice to understand better the problem which you appear to want to
discuss, but you will need to be more thorough in your exposition.

...

> In my eyes, Lisp is a valuable
> experimentation lab for new language mechanisms, but not fit for
> production use.
> Let me add a troll-bait disclaimer:

no troll disclaimers necessary, just substance.

...

Frode Vatvedt Fjeld

Oct 20, 2003, 8:31:02 AM
Joachim Durchholz <joachim....@web.de> writes:

> I'd really like to see a Lisp dialect that valued reliability over
> raw expressive power. But I fear this isn't very high on the agenda
> of the Lisp community.

From someone who just sentences ago admitted to having no practical
experience with Common Lisp, this is an amazingly misinformed and
confused statement.

> In other words: Lisp is too powerful and dangerous, [..]

I have never, ever, over the years I've been using lisp myself and
talked to others who are using it, observed any such problem.

If you are afraid of method combination, then just don't use it. I can
guarantee you that it will not jump at you from some dark cave when
you least expect it, ripping your heart out with razor-sharp
claws. Actually, it's more like a well-trained, domesticated dog: When
you tell it to play dead, you won't know it's there until you
explicitly bring it to life. The same goes for most other features you
might find intimidating.

--
Frode Vatvedt Fjeld

Marcin 'Qrczak' Kowalczyk

Oct 20, 2003, 8:46:16 AM
Followup-To: comp.lang.misc

On Mon, 20 Oct 2003 13:06:08 +0200, Joachim Durchholz wrote:

>>>The longer answer: Multimethods have modularity issues (if whatever
>>>domain they're dispatching on can be extended by independent developers:
>>>different developers may extend the dispatch domain of a function in
>>>different directions, and leave undefined combinations;
>>
>> This doesn't matter until you provide an equally powerful mechanism
>> which fixes that. Which is it?
>
> I don't think there is a satisfactory one. It's a fundamental problem:
> if two people who don't know of each other can extend the same thing
> (framework, base class, whatever) in different directions, who's
> responsible for writing the code needed to combine these extensions?

Indeed. I wouldn't thus blame the language mechanism.

> 1. Let the system decide. Technically feasible for base classes (in the
> form of prioritisation rules for multimethods), technically infeasible for
> frameworks. The problem here is that the system doesn't (usually) have
> enough information to reliably make the correct decision.

Sometimes the programmer can write enough default specializations that it
can be freely extended. Example: drawing shapes on devices. If every shape
is convertible to Bezier curves, and every device is capable of drawing
Bezier curves, then the most generic specialization, for arbitrary shape
and arbitrary device, will call 'draw' again with the shape converted to
Bezier curves.

The potential of multimethods is used: particular shapes have specialized
implementations for particular devices (drawing text is usually better
done more directly than through curves), separate modules can provide
additional shapes and additional devices. Yet it is safe and modular, as
long as people agree who provides a particular specialization.
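
A sketch of that fallback idea in Python (all names invented for
illustration):

drawers = {}

class Shape: pass
class Text(Shape): pass
class Bezier(Shape): pass
class Screen: pass

def to_bezier(shape):
    return Bezier()    # stand-in for a real conversion to curves

def draw(shape, device):
    func = drawers.get((shape.__class__, device.__class__))
    if func is not None:
        return func(shape, device)          # specialized method
    if isinstance(shape, Bezier):
        raise TypeError("device cannot even draw curves")
    return draw(to_bezier(shape), device)   # generic fallback

drawers[(Bezier, Screen)] = lambda s, d: "curves on screen"
drawers[(Text, Screen)] = lambda s, d: "text drawn directly"

print draw(Text(), Screen())    # uses the specialized method
print draw(Shape(), Screen())   # falls back via the Bezier conversion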

It's easy to agree with a certain restriction: the specialization is
provided either by the module providing the shape or by the module providing
the device. In practice the restriction doesn't always have to be followed
- it's enough that the module providing the specialization is known to all
people who might want to write their own, so I wouldn't advocate enforcing
the restriction on the language level.

I would favor multimethods even if they provided only solutions extensible
in one dimension, since they are nicer than having to enumerate all cases
in one place. Better to have a partially extensible mechanism than nothing.
Here it is extensible.

> 2. Let the system declare an error if the glue code isn't there.
> Effectively prohibits all forms of dynamic code loading. Can create risks
> in project management (unexpected error messages during code integration
> near a project deadline - yuck). Creates a temptation to hack the glue
> code up, by people who don't know the details of the two modules involved.

It would be interesting to let the system find the coverage of multimethods,
but without making it an error if not all combinations are covered. It's
useful to be able to test an incomplete program.

There is no definite answer for what kind of errors should prevent running
the program. It's similar to static/dynamic typing, or being able to
compile calls to unimplemented functions or not.

Even if the system shows that all combinations are covered, it doesn't
imply that they do the right thing. It's analogous to failing to override
a method in class-based OOP - the system doesn't know if the superclass
implementation is appropriate for the subclass. So you can't completely
rely on detection of such errors anyway.

> 3. Disallow extending in multiple directions. In other words, no
> multimethods, and live with the asymmetry. Too restricted to be
> comfortable with.

I agree.

> 4. As (3), but allow multiple extensions if they are contained within the
> same module. I.e. allow multiple dispatch within an "arithmetics" module
> that defines the classes Integer, Real, Complex, etc. etc., but don't
> allow additional multiple dispatch outside the module. (Single dispatch
> would, of course, be OK.)

For me it's still too restricted. It's a useful guideline to follow but
it should not be a hard requirement.

> 5. As (3), but require manual intervention. IOW let the two authors who
> did the orthogonal extensions know about each other, and have each module
> refer to the other, and each module carry the glue code required to
> combine with the other.

The glue code might reside in yet another module, especially if each of
the modules makes sense without the other (so it might better not depend
on it). Again, for me it's just a guideline - if one of the modules can
ensure that it's composable with the other, it's a good idea to change it -
but I would like to be able to provide the glue code elsewhere to make
them working in my program which uses both, and remove it once the modules
include the glue code themselves.

> Actually, this is the practice for various open source projects. For
> example, authors of MTAs, mail servers etc. cooperate to set standards. Of
> course, if the authors aren't interested in cooperating, this doesn't work
> well either.

The modules might also be a part of one program, where it's relatively
easy to make them cooperate. Inability to cope with some uses is generally
not a sufficient reason to reject a language mechanism which also has well
working uses.

> 6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
> your language offers for that purpose, be it "generics" or "templates").

I think it can rarely solve the same problem. C++ templates (which can
use overloaded operations, i.e. with implementation dependent on type
parameters) help only in statically resolvable cases. Fully parametric
polymorphism doesn't seem to help at all even in these cases (equality,
arithmetic).

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Peter Hansen

unread,
Oct 20, 2003, 9:28:41 AM10/20/03
to
mik...@ziplip.com wrote in separate messages:

>
> Anyway, this discussion is boring me. I'll leave you to
> argue amongst yourselves. My job here is done.

...that job now proven to have been to start a language flame-fest.

"420", please stay out of comp.lang.python in the future. I also
suspect the comp.lang.functional and comp.lang.lisp people, being
reasonable beings, would wish the same for their respective groups.

-Peter

Duncan Booth

unread,
Oct 20, 2003, 10:01:59 AM10/20/03
to
mik...@ziplip.com wrote in
news:LQLWMDEMOIKDAOELJUAG...@ziplip.com:

> In each of these cases, my version uses as many or fewer syntactic
> elements (commas and parens) than yours. The problem with Python is
> that it also lacks uniformity here (think "print" and "del").

I fail to see the lack of uniformity. "print" and "del" are statements, and
like other statements (think "if", "for") they don't require parentheses.
Function calls do require parentheses because that is the way most people
who use procedural languages are accustomed to seeing function calls.

Joachim Durchholz

unread,
Oct 20, 2003, 10:15:44 AM10/20/03
to
Espen Vestre wrote:

> Joachim Durchholz <joachim....@web.de> writes:
>
>>Anyway, I lost interest in CLOS when I saw those clumsy BEFORE and AFTER
>>keywords, and that priorization machinery for multimethods. Too
>>complicated, too liberal (allowing lots of powerful things and lots of
>>subtle bugs).
>
> If you come to such a decision without even trying it out, it doesn't
> mean CLOS has a problem, but rather that you have an attitude problem.

Well, if you come to such a decision without even knowing me personally,
I think it's your attitude problem as well ;-)

Actually, my experiences with Lisp come from several pre-Scheme
encounters. They include a rather poor (by today's standards) dialect
called Interlisp, which worked OK but was a little restricted, and a
Lisp machine that offered no protection whatsoever for the system
internals and, consequently, crashed in ways that make even Windows ME
look like the incarnation of stability.
When I looked at Scheme and CLOS, I found lots of new added mechanism,
but no attempts at fixing the old problems. (Which, probably, wasn't
even on the agenda. Maybe it's just me who sees problems.)

> Multimethods and before/after/around-methods are among the things
> that make me really happy as a lisp programmer, and with them I've
> done things to systems - with a few lines of code - that would have
> required a complete rewrite with more limited languages.

before/after/around don't offer anything that a simple call to an
ancestor method wouldn't offer, and with less linguistic requirements.
It looks as if somebody reimplemented Simula features without thinking
about available improvements.
Unless, of course, Lisp before/after/around semantics is different than
a simple super/precursor/whatever call; in which case a more visible
warning in the descriptions of before/after/around semantics would have
been in order...

Multimethods, on the other hand, are indeed powerful, but they are also
dangerous. Just like GOTO - you can use it to make code better, but
often enough it's making it worse. Multimethods are just a case where
problems disguise themselves as coding errors - looking at the
sophisticated user-definable machinery for selecting the right method
during multimethod dispatch, it seems like these problems indeed showed
up, and were "solved" by adding further baroqueness to the language. To
the point that reading the source code of a function will not reveal
what's actually happening, because some quirk in multimethod resolution
strategy may select entirely different subfunctions than those that the
reader would have expected.
From a software maintenance perspective, this is pure disaster.

>>unsafe-but-powerful stuff. In my eyes, Lisp is a valuable
>>experimentation lab for new language mechanisms, but not fit for
>>production use.
>
> Hmm. I wonder why my CLOS-infested server software keep running for
> MONTHS?

Maybe because you're one of those above-average bright guys?

And, maybe, because you're not working in a team of a dozen or more
people, so you know exactly what combinations of types are safe to use
with a multimethod, and which aren't? And if a problem indeed shows up,
you don't attribute this to multimethods per se, but to some stupid
coding error, and you simply fix the problem? (C programmers don't see C
as a dangerous language, they just consider race conditions and buffer
overruns as stupid programming mistakes as well... deficits of a
language are easier to see if you take an outside perspective.)

>>In other words: Lisp is too powerful and dangerous, C++ is too tricky, C
>>is too low-level, Java is too slow (even when compiled) and slightly too
>>restricted, [add your favourite language and its deficits here] - choose
>>your evil...
>
> But of all evils, Common Lisp is the least, since it gives you the
> most reliable code (yes it DOES!), gives good programmmers the
> opportunity to write wonderfully readable code, is wonderfully
> expressive and is Great Fun to work with.

In practice, most programmers aren't great, they are average. Assuming a
halfways sane distribution, 50% of all programmers are even /below/
average - and their services are still very much in need.
How should they get their work done?
Educating them isn't an option - if that were a possibility, it would
have long been done.

Regards,
Jo

Joachim Durchholz

unread,
Oct 20, 2003, 10:25:03 AM10/20/03
to
Frode Vatvedt Fjeld wrote:

> If you are afraid of method combination, then just don't use it. I can
> guarantee you that it will not jump at you from some dark cave when
> you least expect it, ripping your heart out with razor-sharp
> claws.

Unless you're maintaining code written by others.
I don't know what's the norm in the Lisp community, but I spend about
80% of my time reading and modifying legacy code. If a language offers a
dark corner, I'm sure I'll hit it more often than I want.
The bad thing about such dark corners is: if you try to clean the mess
up, you'll invariably break things. After a few such mishaps, you don't
even try to mess with that code. Avoiding messes will, after a few
maintenance cycles, produce a true mess, until the entire system is
thrown away and rewritten from scratch, in a different language, with
different dark corners. Which means that, in a decade from now, when the
original authors are gone, the same cycle will start.
Not my idea of professional software development. No sir.

> Actually, it's more like a well-trained, domesticated dog: When
> you tell it to play dead, you won't know it's there until you
> explicitly bring it to life. The same goes for most other features you
> might find intimidating.

Hey, that's the first time anybody said I found a concept intimidating!

Regards,
Jo

Joachim Durchholz

unread,
Oct 20, 2003, 10:20:59 AM10/20/03
to
james anderson wrote:

> Joachim Durchholz wrote:
>
> how about formulating some examples in some language which is adequate to
> express them? perhaps somewhat more concretely than the allusions in your
> earlier message, in which you suggest some problem domains and some
> amorphously difficult decisions, but despite several rereadings, never
> concretely indicate what does not "work".
>
> what does "different directions" mean? "glue code"? "asymmetry"? a "base
> class"? a "module"? an "orthogonal extension"?
>
> what is the distinction between "dynamic dispatch" and "parametric polymorphism".
>
> if not in the context of clos, then, well, in english.

Sorry - this would go beyond the scope of a newsgroup discussion. It
would take me several hours to get this all sorted out, written down,
and worded so that it's generally understandable.
And, frankly, I already have spent too much time on this thread.

I do intend to write it all up and publish it on a WWW site - in my
copious spare time... :-(

Let me assure you that all these nebulous terms are due to time
constraints, not due to fuzzy reasoning.

Sorry if this all sounds like a lame excuse (actually it is).
And sorry to leave you with lots of fuzzy allusions and no concrete
data. Others may be willing to fill in more details.

>>I'd really like to see a Lisp dialect that valued reliability over raw
>>expressive power. But I fear this isn't very high on the agenda of the
>>Lisp community. Besides, it would be difficult to do that - Lisp offers
>>no protection against peeking at internals and setting up all that
>>unsafe-but-powerful stuff.
>
> what are "internals", what is "protection"?

No way to define an opaque type. AFAIK, modern Lisps allow user-defined
types, but they offer no way to protect them against inspecting their
internals. I'd prefer to have at least a grain of information hiding...
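
For instance (a sketch; the package and names are made up): CL's
packages hide names by convention only, and the double-colon walks
right past the fence.

(defpackage :geometry (:use :cl) (:export #:make-point))
(in-package :geometry)
(defstruct point (x 0) (y 0))   ; accessors deliberately not exported

(in-package :cl-user)
(geometry::point-x (geometry:make-point))   ; => 0 -- reachable anyway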

Regards,
Jo

Wojtek Walczak

unread,
Oct 20, 2003, 10:25:21 AM10/20/03
to
On Sun, 19 Oct 2003 04:18:31 -0700 (PDT), mik...@ziplip.com wrote:
> THE GOOD:
[...]
> THE BAD:
[...]

Well, among the variety of languages and plenty of concepts you can search
for your language of choice. Just because all the things you mentioned in
"THE BAD" are available in other languages doesn't mean they should also
exist in Python. Languages are different, just as people are. If you find
Python has more cons than pros, it means that this is not a language you
can get 100% of your fun from. Anyway, turning it into the next Haskell,
Smalltalk or Ruby makes no sense. Python fills a certain niche and does
its job as it should. Differences are a necessity, so don't waste your time
on talk about making Python similar to something else.

--
[ Wojtek Walczak - gminick (at) underground.org.pl ]
[ <http://gminick.linuxsecurity.pl/> ]
[ "...rozmaite zwroty, matowe od patyny dawnosci." ]

james anderson

unread,
Oct 20, 2003, 10:56:40 AM10/20/03
to

that was a bizarre post.

the author begins by describing problematic experiences with a dialect of lisp
which, by his own qualification was "poor (by today's standards)," then he
continues with a characterization of the behaviour of method qualification,
which characterization he then deprecates in the next sentence. then, after
having indicated that he may well not have acquaited himself with the language
documentation, he proceeds to express skepticism about how much one can infer
from immediate method definitions.

it is evident that he does have direct experience with these issues in broader
terms, which makes it a shame that he prefers to conjecture about the
performance of the lower 50% rather than to state the issues concretely.

...

Joachim Durchholz wrote:
>
> ...

james anderson

unread,
Oct 20, 2003, 10:58:53 AM10/20/03
to

Joachim Durchholz wrote:


>
> james anderson wrote:
>
>
> No way to define an opaque type. AFAIK, modern Lisps allow user-defined
> types, but they offer no way to protect them against inspecting their
> internals. I'd prefer to have at least a grain of information hiding...

unless the writer is willing to be more specific, i suggest that the most
significant term in the above paragraph is "AFAIK".

>
> Regards,
> Jo

Joe Marshall

unread,
Oct 20, 2003, 10:52:27 AM10/20/03
to
Joachim Durchholz <joachim....@web.de> writes:

> before/after/around don't offer anything that a simple call to an
> ancestor method wouldn't offer, and with less linguistic
> requirements. It looks as if somebody reimplemented Simula features
> without thinking about available improvements.
> Unless, of course, Lisp before/after/around semantics is different
> than a simple super/precursor/whatever call; in which case a more
> visible warning in the descriptions of before/after/around semantics
> would have been in order...

They are. Using a simple super/precursor/whatever call delegates
responsibility to the subclass that rightly belongs to the superclass.

Frode Vatvedt Fjeld

unread,
Oct 20, 2003, 10:54:04 AM10/20/03
to
Joachim Durchholz <joachim....@web.de> writes:

> Unless you're maintaining code written by others. I don't know
> what's the norm in the Lisp community, but I spend about 80% of my
> time reading and modifying legacy code. If a language offers a dark

> corner, I'm sure I'll hit it more often than I want. [..]

But this is just like being afraid of the dark! You have made it quite
clear that you know next to nothing about method combination, and that
you have made false assumptions about it, based on which you conclude
that there are "dark corners". There is no dark corner here. Method
combination is a feature that is relatively infrequently used, but can
sometimes provide wonderful, readable and maintainable solutions that
have no parallel in other languages that I know of. Using superficial
aspects of its syntax as a reason to disregard the language, is
downright absurd.

--
Frode Vatvedt Fjeld

Espen Vestre

unread,
Oct 20, 2003, 11:02:40 AM10/20/03
to
Joachim Durchholz <joachim....@web.de> writes:

> Actually, my experiences with Lisp come from several pre-Scheme
> encounters. They include a rather poor (by today's standards) dialect
> called Interlisp, which worked OK but was a little restricted, and a
> Lisp machine that offered no protection whatsoever for the system
> internals and, consequently, crashed in ways that make even Windows ME
> look like the incarnation of stability.

Probably, you have tried, as I did, to use Xerox Lisp Machines in
standalone mode. They were much more stable when run with a server (I
tried that too, at Xerox PARC). Their standalone user base wasn't
really big enough, and the developers used them with a server (this
isn't something I'm just guessing, a PARC guy admitted having similar
problems on his standalone 1100-series machine at home).

(Anyway, even using them standalone was better than Windows ME wrt.
stability)

> before/after/around don't offer anything that a simple call to an
> ancestor method wouldn't offer, and with less linguistic
> requirements.

Yes they do. E.g. :around-methods wrap _around_ the primary method.
If you want the most specific method to be called first, you can
always use ordinary ancestor methods (call-next-method).

> And, maybe, because you're not working in a team of a dozen or more
> people, so you know exactly what combinations of types are safe to use
> with a multimethod, and which aren't?

Not a dozen, but 5 people are still maintaining and expanding the
system I was the main author of 5 years ago (and I left them almost
2 years ago).

I agree that multimethods must be used with care. But so must many
constructs, even in languages that try hard to put a straitjacket
on their users.

> stupid coding error, and you simply fix the problem? (C programmers
> don't see C as a dangerous language, they just consider race
> conditions and buffer overruns as stupid programming mistakes as
> well... deficits of a language are easier to see if you take an
> outside perspective.)

Sorry, but I don't think the multimethod/buffer overrun analogy is a
fair one (except if you can come up with at least one real life
example of multi-method-hell at work).

> In practice, most programmers aren't great, they are average. Assuming
> a halfways sane distribution, 50% of all programmers are even /below/
> average - and their services are still very much in need.
> How should they get their work done?
> Educating them isn't an option - if that were a possibility, it would
> have long been done.

The problem with this attitude is that below-average programmers
usually advance, through the Dilbert Principle, to pointy-haired
positions, and if there is this idea that "everybody must use the same
programming language, and it must be a simple one" floating around as
a principle, you can be sure that they grab it.

Another point: Do you have any _substantial_ reasons for claiming that
mediocre (or worse) programmers _really_ work better with any of the
more mainstream languages than they would do with Common Lisp (if we
for a moment disregard the most simple 'library-assembling-
programming' in VB or java where CL obviously currently has a drawback
by supporting fewer libraries)?

Yet another point: Inside every substantially advanced program,
there's a lisp trying to get out. _My_ idea (this is just guessing, I
admit), is that large CL systems are _easier_ to maintain, even for
Joe Notsobright, than large complicated systems written in other
languages, where all the fancy mechanisms have been reinvented (had to
be reinvented!) in much more obscure ways to make the program work.
--
(espen)

Alex Martelli

unread,
Oct 20, 2003, 11:15:34 AM10/20/03
to
Pascal Costanza wrote:
...

> So how do you implement an equality operator correctly with only single
> dynamic dispatch?

Equality is easy, as it's commutative -- pseudocode for it might be:

def operator==(a, b):
    try: return a.__eq__(b)
    except I_Have_No_Idea:
        try: return b.__eq__(a)
        except I_Have_No_Idea:
            return False

Non-commutative operators require a tad more, e.g. Python lets each
type define both an __add__ and a __radd__ (rightwise-add):

def operator+(a, b):
    try: return a.__add__(b)
    except (I_Have_No_Idea, AttributeError):
        try: return b.__radd__(a)
        except (I_Have_No_Idea, AttributeError):
            raise TypeError, "can't add %r and %r" % (type(a),type(b))

Multimethods really shine in HARDER problems, e.g., when you have
MORE than just two operands (or, perhaps, some _very_ complicated
inheritance structure -- but in such cases, even multimethods are
admittedly no panacea). Python's pow(a, b, c) is an example --
and, indeed, Python does NOT let you overload THAT (3-operand)
version, only the two-operand one that you can spell pow(a, b)
or a**b.


Alex

Jon S. Anthony

unread,
Oct 20, 2003, 11:32:01 AM10/20/03
to
mik...@ziplip.com writes:

...

Do you Xah?

/Jon

Ingvar Mattsson

unread,
Oct 20, 2003, 11:19:14 AM10/20/03
to
[ Follow-up redirected to comp.lang.lisp ]
Joachim Durchholz <joachim....@web.de> writes:

> james anderson wrote:
>
> > Joachim Durchholz wrote:

[SNIP]


> >>I'd really like to see a Lisp dialect that valued reliability over raw
> >>expressive power. But I fear this isn't very high on the agenda of the
> >>Lisp community. Besides, it would be difficult to do that - Lisp offers
> >>no protection against peeking at internals and setting up all that
> >>unsafe-but-powerful stuff.
> > what are "internals", what is "protection"?
>
> No way to define an opaque type. AFAIK, modern Lisps allow
> user-defined types, but they offer no way to protect them against
> inspecting their internals. I'd prefer to have at least a grain of
> information hiding...

Well... Using INSPECT one can "look inside" a type, but one has almost
the same (possibly the same) amount of introspection in (say) Python,
using dir() (and very handy it is, not as good as reading
documentation, but for those "I need to check this now" moments, it
can often be sufficient).
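
(A tiny illustration; the class is made up:)

(defclass ticket () ((id :initform 42) (owner :initform "nobody")))
(describe (make-instance 'ticket))   ; prints the slots and their values
(inspect (make-instance 'ticket))    ; interactive browsing, dir()-style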

One should, as usual, follow one's introspective moments by checking
The Reference(s), to see if what one saw, introspecting, is
"supported" or "unsupported" (if the latter, one can bet things will
break in interesting ways, either "soon" or "on next upgrade"). But,
at times, that can be the only way to use a third-party module that
*almost* fits one's needs.

I saw a horror-story of someone having to retro-fit third-party C++
modules by clever run-time mangling of things, to get it to work
Right, instead of "it does almost what the documentation says, but not
quite and fixing it is faster than getting a patch or rolling it
ourselves". I think it was in a thread in comp.lang.lisp, a few years
back, regarding why it was a *bad* thing forbidding access to
non-exported symbols from a package. Not exporting something is a good
sign that one shouldn't touch it and well-behaved programmers
won't. They *may* file a bug saying "this would be handy if it was
supported", but they should, in general, not go ahead and use it.

//Ingvar
--
Coffee: Nectar of gods / Aesir-mead of sysadmins / Elixir of life

Greg Menke

unread,
Oct 20, 2003, 11:28:41 AM10/20/03
to

Joachim Durchholz <joachim....@web.de> writes:
> Frode Vatvedt Fjeld wrote:
>
> > If you are afraid of method combination, then just don't use it. I can
> > guarantee you that it will not jump at you from some dark cave when
> > you least expect it, ripping your heart out with razor-sharp
> > claws.
>
> Unless you're maintaining code written by others.
> I don't know what's the norm in the Lisp community, but I spend about
> 80% of my time reading and modifying legacy code. If a language offers
> a dark corner, I'm sure I'll hit it more often than I want.
> The bad thing about such dark corners is: if you try to clean the mess
> up, you'll invariably break things. After a few such mishaps, you
> don't even try to mess with that code. Avoiding messes will, after a
> few maintenance cycles, produce a true mess, until the entire system
> is thrown away and rewritten from scratch, in a different language,
> with different dark corners. Which means that, in a decade from now,
> when the original authors are gone, the same cycle will start.
> Not my idea of professional software development. No sir.

You'll find the same problems with any large project in any language.

Maintainable, Complex, Inexpensive

Choose any 2 and the 3rd is where you'll pay for it.


Gregm

Pascal Costanza

unread,
Oct 20, 2003, 11:35:38 AM10/20/03
to
Joachim Durchholz wrote:

> Multimethods, on the other hand, are indeed powerful, but they are also
> dangerous.

Life is dangerous.

> Just like GOTO - you can use it to make code better, but
> often enough it's making it worse.

How often?

> Multimethods are just a case where
> problems disguise themselves as coding errors - looking at the
> sophisticated user-definable machinery for selecting the right method
> during multimethod dispatch, it seems like these problems indeed showed
> up, and were "solved" by adding further baroqueness to the language. To
> the point that reading the source code of a function will not reveal
> what's actually happening, because some quirk in multimethod resolution
> strategy may select entirely different subfunctions than those that the
> reader would have expected.
> From a software maintenance perspective, this is pure disaster.

Is this based on actual experience, or are you just guessing?

> In practice, most programmers aren't great, they are average. Assuming a
> halfways sane distribution, 50% of all programmers are even /below/
> average - and their services are still very much in need.
> How should they get their work done?
> Educating them isn't an option - if that were a possibility, it would
> have long been done.

No, because people are already educated under the assumption that they
are not bright. This assumption is very deeply rooted in our society,
but I don't see any evidence that it has actually improved anything. To
the contrary, it seems to me that people stay "average" _because_ they
are treated this way. IMHO it's very cynical to assume that other people
are less bright than oneself.

Did it ever occur to you that learning a language designed for experts
can make you a better programmer?


Pascal

Markus Mottl

unread,
Oct 20, 2003, 11:46:47 AM10/20/03
to
In comp.lang.functional Kenny Tilton <kti...@nyc.rr.com> wrote:
> Dennis Lee Bieber wrote:
>> Short version: The software performed correctly, to specification
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>> DESIGNED.

> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

Dennis is right: it was indeed a specification problem. AFAIK, the coder
had actually even proved formally that the exception could not arise
with the spec of Ariane 4. Lisp code, too, can suddenly raise unexpected
exceptions. The default behaviour of the system was to abort the mission
for safety reasons by blasting the rocket. This wasn't justified in this
case, but one is always more clever after the event...

> "supposed to" fail? chya.

Indeed. Values this extreme were considered impossible on Ariane 4 and
taken as indication of such a serious failure that it would justify
aborting the mission.

> This was nothing more than an unhandled exception crashing the sytem
> and its identical backup.

Depends on what you mean by "crash": it certainly didn't segfault. It
just realized that something happened that wasn't supposed to happen
and reacted AS REQUIRED.

> Other conversions were protected so they could handle things
> intelligently, this bad boy went unguarded.

Bad, indeed, but absolutely safe with regard to the spec of Ariane 4.

> Note also that the code functionality was pre-ignition
> only, so there is no way they were thinking that a cool way to abort the
> flight would be to leave a program exception unhandled.

This is a serious design error, not a problem of the programming language.

> What happened (aside from an unnecessary chunk of code running
> increasing risk to no good end)

Again, it's a design error.

> is that the extra power of the A5 caused
> oscillations greater than those seen in the A4. Those greater
> oscillations took the 64-bit float beyond what would fit in the 16-bit
> int. kablam. Operand Error. This is not a system saying "whoa, out of
> range, abort".

Well, the system was indeed programmed to say "whoa, out of range, abort".
A design error.

> As for Lisp not helping:

There is basically no difference between checking the type of a value
dynamically for validity and catching exceptions that get raised on
violations of certain constraints. One can forget to do both or react
to those events in a stupid way (or prove in both cases that the check /
exception handling is unnecessary given the spec).
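
In Lisp terms the two styles being compared might look like this
(TO-16-BIT and ABORT-MISSION are made-up names; TO-16-BIT is assumed
to signal ARITHMETIC-ERROR on overflow, like the Ada conversion):

;; style 1: check the value dynamically before converting
(defun convert-reading (x)
  (if (<= -32768.0 x 32767.0)   ; range of a 16-bit signed integer
      (to-16-bit x)
      (abort-mission)))

;; style 2: convert, and catch the condition raised on violation
(defun convert-reading* (x)
  (handler-case (to-16-bit x)
    (arithmetic-error () (abort-mission))))

Either check can be forgotten, and either policy can be wrong - which
is the point.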

Note that I am not defending ADA in any way or arguing against FPLs: in
fact, being an FPL-advocate myself I do think that FPLs (including Lisp)
have an edge as far as writing safe code is concerned. But the Ariane-example just
doesn't support this claim. It was an absolutely horrible management
mistake to not check old code for compliance with the new spec. End
of story...

Regards,
Markus Mottl

--
Markus Mottl http://www.oefai.at/~markus mar...@oefai.at

Kenny Tilton

unread,
Oct 20, 2003, 12:09:43 PM10/20/03
to

Fergus Henderson wrote:

> Kenny Tilton <kti...@nyc.rr.com> writes:
>
>
>>Dennis Lee Bieber wrote:
>>
>>
>>> Just check the archives for comp.lang.ada and Ariane-5.


>>>
>>> Short version: The software performed correctly, to specification
>>>(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>>>DESIGNED.
>>
>>Nonsense.
>
>

> No, that is exactly right. Like the man said, read the archives for
> comp.lang.ada.

Yep, I was wrong. They /did/ handle the overflow by leaving the
operation unguarded, trusting it to eventually bring down the system,
their design goal. Apologies to Dennis.

>
>
>>From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>>
>>"The internal SRI software exception was caused during execution of a
>>data conversion from 64-bit floating point to 16-bit signed integer
>>value. The floating point number which was converted had a value greater
>>than what could be represented by a 16-bit signed integer. This resulted
>>in an Operand Error. The data conversion instructions (in Ada code) were
>>not protected from causing an Operand Error, although other conversions
>>of comparable variables in the same place in the code were protected.
>>The error occurred in a part of the software that only performs
>>alignment of the strap-down inertial platform. This software module
>>computes meaningful results only before lift-off. As soon as the
>>launcher lifts off, this function serves no purpose."
>
>

> That's all true, but it is only part of the story, and selectively quoting
> just that part is misleading in this context.

I quoted the entire paragraph and it seemed conclusive, so I did not
read the rest of the report. ie, I was not being selective, I just
assumed no one would consider crashing to be a form of error-handling.
My mistake, they did.

Well, the original question was, "Would Lisp have helped?". Let's see.
They dutifully went looking for overflowable conversions and decided
what to do with each, deciding in this case to do something appropriate
for the A4 which was inappropriately allowed by management to go into
the A5 unexamined.

In Lisp, well, there are two cases. Did they have to dump a number into
a 16-bit hardware channel? There was some reason for the conversion. If
not, no Operand Error arises. It is an open question whether they decide
to check anyway for large values and abort if found, but this one arose
only during a sweep of all such conversions, so probably not.

But suppose they did have to dance to the 16-bit tune of some hardware
blackbox. they would go thru the same reasoning and decide to shut down
the system. No advantage to Lisp. But they'd have to do some work to
bring the system down, because there would be no overflow. So:

(define-condition e-hardware-broken (e-pre-ignition e-fatal)
  ((component-id :initarg :component-id :reader component-id)
   (bad-value :initarg :bad-value :initform nil :reader bad-value)
   ...etc etc...

And then they would have to kick it off, and the exception handler of
the controlling logic would get a look at the condition on the way out.
Of course, it also sees operand errors, so one can only hope that at
some point during testing they for some reason had /some/ condition of
type e-pre-ignition get trapped by the in-flight supervisor, at which
point someone would have said either throw it away or why is that module
still running?

Or, if they were as meticulous with their handlers as they were with
numeric conversions, they would have during the inventory of explicit
conditions to handle gotten to the pre-ignition module conditions and
decided, "what does that software (which should not even be running)
know about the hardware that the rest of the system does not know?".
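
Something like the following is presumably the supervisor's-eye view
(the condition types come from the hypothetical hierarchy above; the
function names are invented):

(handler-case (run-alignment-module)
  (e-pre-ignition (c)
    ;; in flight, a pre-ignition complaint is noise, not a HW verdict
    (log-condition c))
  (e-fatal (c)
    (fall-back-to-backup-unit c)))

Since e-hardware-broken inherits from both, the clause order encodes
the design decision: the first matching clause wins, so a complaint
from a module that should not even be running gets logged, not obeyed.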

The case is not so strong now, but the odds are still better with Lisp.

kenny


--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Alex Martelli

unread,
Oct 20, 2003, 12:12:31 PM10/20/03
to
Thomas F. Burdick wrote:
...
> So do the Python masses get to use multimethods?

Sure! Check out http://codespeak.net/ : pypy is aggressively open-source,
and both the masses and the elites get to download and reuse all they want.


> (with-lisp-trolling
> And have you seen the asymptote yet, or do you need to grow macros
> first?)

We felt absolutely no need to tweak Python's syntax in the least in
order to implement multi-methods, so, no need for macros (including
Armin Rigo, who, I think, does have extensive experience using CL).

"The asymptote" of pypy is Python -- an implementation more flexible
than the current C and Java ones, giving better optimization (Armin
is convinced he can easily surpass his own psyco, that way), ease of
fine-grained subsetting (building tiny runtimes for cellphones &c),
and also, no doubt, ease of play and experimentation (oops, we'd
better say "Research", it sounds way more dignified doesn't it!).

Macros are definitely not part of our current plans. But, hey, this
is just a summary: visit http://codespeak.net/ and see for yourself --
everything is spelled out in great detail, we have no secrets. Get
a subversion client and download everything, check out all of the
mailing lists' archives -- have a ball. Anybody who wants to play
along is welcome to join any of our "sprints" for a week or so of
nearly-nonstop heavy duty pair-programming -- "nearly" because we
generally manage to schedule a barbecue, picnic, beer-bash, or other
suchlike outing (and a lot of fruitful design discussion takes place
during that scheduled break, in my observation). Between sprints,
mailing lists, wikis, IRC and the like keep the fires going. Indeed,
the social aspects of the pypy experience manage to be almost more
fascinating than the technical ones, which IS saying something (and
reinforces my beliefs about programming being first and foremost an
issue of social interaction, but that's another thread:-).

Ah, yeah, one sad thing for non-Europeans -- pypy's very much a
European thing -- everybody's welcome, but you'll have a hard time
convincing us to schedule a sprint elsewhere (each participant pays
his or her own travel costs, you see...). Still, codespeak.net
does give free access to all material anyway, wherever you are:-).

[ducking back out of c.l.lisp...:-)]


Alex

Kenny Tilton

unread,
Oct 20, 2003, 12:49:25 PM10/20/03
to

Joachim Durchholz wrote:


> I'd really like to see a Lisp dialect that valued reliability over raw
> expressive power. But I fear this isn't very high on the agenda of the
> Lisp community.

Reliability does not have to come from a strait-jacket language. Here is
a C++ Fanatic converted to Python, who gets reliability from test-driven
development: http://www.artima.com/weblogs/viewpost.jsp?thread=4639

> Let me add a troll-bait disclaimer: Actually I don't see *any* language
> that's fit for production use.

I have used CL for a huge application, using CLOS heavily. The first
couple of months had me refactoring quite a bit as I searched for
personal guiding principles, esp. with regards to multiple inheritance.
On a prior substantial app, I worked out my personal prefs with regards
to the model-view thing.

In the end I have my own rules which /limit/ the ways I use CLOS. These
self-imposed constraints tame the potentially wild beast. The only power
lost is the power to tie myself in knots.

OO /is/ a slightly different paradigm, and CLOS does have a ton of
expressive power. It is also approachable, so it is easy to just dive in
and start winging code around. But there is no reason to think one will
not have to train a few neurons to get /fluent/ in something so substantial.

Pascal Bourguignon

unread,
Oct 20, 2003, 1:03:10 PM10/20/03
to

Fergus Henderson <f...@cs.mu.oz.au> writes:
> <http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.

The post at that url writes about the culture of the Ariane team, but
I would say that it's even a more fundamental problem of our culture
in general: we build brittle stuff with very little margin for error.
Granted, it would be costly to increase physical margin, but in this
case, adopting a point of view more like _robotics_ could help. Even
in case of hardware failure, there's no reason to shut down the mind;
just go on with what you have.


--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(

Kenny Tilton

unread,
Oct 20, 2003, 1:04:46 PM10/20/03
to

Markus Mottl wrote:

> In comp.lang.functional Kenny Tilton <kti...@nyc.rr.com> wrote:
>
>>Dennis Lee Bieber wrote:
>>
>>> Short version: The software performed correctly, to specification
>>>(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>>>DESIGNED.
>
>
>>Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
>
> Dennis is right: it was indeed a specification problem. AFAIK, the coder
> had actually even proved formally that the exception could not arise
> with the spec of Ariana 4. Lisp code, too, can suddenly raise unexpected
> exceptions. The default behaviour of the system was to abort the mission
> for safety reasons by blasting the rocket. This wasn't justified in this
> case, but one is always more clever after the event...
>
>
>>"supposed to" fail? chya.
>
>
> Indeed. Values this extreme were considered impossible on Ariane 4 and
> taken as indication of such a serious failure that it would justify
> aborting the mission.

Yes, I have acknowledged in another post that I was completely wrong in
my guesswork: everything was intentional and signed-off on by many.

A small side-note: as I now understand things, the idea was not to abort
the mission, but to bring down the system. The thinking was that the
error would signify a hardware failure, and with any luck shutting down
would mean either loss of the backup system (if that was where the HW
fault occurred) or correctly falling back on the still-functioning
backup system if the supposed HW fault had been in the primary unit. ie,
an HW fault would likely be isolated to one unit.

Steve Schafer

unread,
Oct 20, 2003, 1:42:22 PM10/20/03
to
On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
<sp...@thalassa.informatimago.com> wrote:

>Even in case of hardware failure, there's no reason to shut down the
>mind; just go on with what you have.

When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.

-Steve

Terry Reedy

unread,
Oct 20, 2003, 1:55:21 PM10/20/03
to

"Markus Mottl" <mar...@oefai.at> wrote in message
news:bn1017$m39$1...@bird.wu-wien.ac.at...

> Note that I am not defending ADA in any way or arguing against FPLs:
> in fact, being an FPL-advocate myself I do think that FPLs (including
> Lisp) have an edge as far as writing safe code is concerned. But the
> Ariane-example just doesn't support this claim. It was an absolutely
> horrible management mistake to not check old code for compliance with
> the new spec. End of story...

The investigating commission reported about 5 errors that, in series,
allowed the disaster. As I remember, another nonprogrammer/language
one was in mockup testing. The particular black box, known to be
'good', was not included, but just simulated according to its expected
behavior. If it has been included, and a flight similated in real
time with appropriate tilting and shaking, it should probably have
given the spurious abort message that it did in the real flight.

TJR


Erann Gat

unread,
Oct 20, 2003, 1:17:24 PM10/20/03
to
In article <OAUkb.12491$pT1....@twister.nyc.rr.com>, Kenny Tilton
<kti...@nyc.rr.com> wrote:

[Discussing the Arianne failure]

> A small side-note: as I now understand things, the idea was not to abort
> the mission, but to bring down the system. The thinking was that the
> error would signify a hardware failure, and with any luck shutting down
> would mean either loss of the backup system (if that was where the HW
> fault occurred) or correctly falling back on the still-functioning
> backup system if the supposed HW fault had been in the primary unit. ie,
> an HW fault would likely be isolated to one unit.

That's right. This is why hardware folks spend a lot of time thinking
about common mode failures, and why software folks could learn a thing or
two from the hardware folks in this regard.

E.

Joachim Durchholz

unread,
Oct 20, 2003, 2:28:04 PM10/20/03
to
Pascal Bourguignon wrote:
> The post at that url writes about the culture of the Ariane team, but
> I would say that it's even a more fundamental problem of our culture
> in general: we build brittle stuff with very little margin for error.
> Granted, it would be costly to increase physical margin,

Which is exactly why the margin is kept as small as possible.
Occasionally, it will be /too/ small.

Anybody seen a car model series, every one working perfectly from the
first one?
From what I read, every new model has its small quirks and
"near-perfect" gotchas. The difference is just that you're not allowed
to do that in expensive things like rockets (which is, among many other
things, one of the reasons why space vehicles and aircraft are so d*mn
expensive: if something goes wrong, you can't just drive them on the
nearest parking lot and wait for maintenance and repair...)

> but in this
> case, adopting a point of view more like _robotics_ could help. Even
> in case of hardware failure, there's no reason to shut down the mind;
> just go on with what you have.

As Steve wrote, letting a rocket carry on regardless isn't a good idea
in the general case: it would be a major disaster if it made it to the
next coast and crashed into the next town. Heck, it would be enough if
the fuel tanks leaked, and the whole fuel rained down on a ship
somewhere in the Atlantic - most rocket fuels are toxic.

Regards,
Jo

Joachim Durchholz

unread,
Oct 20, 2003, 2:41:57 PM10/20/03
to
Espen Vestre wrote:

> Joachim Durchholz <joachim....@web.de> writes:
>
>> Actually, my experiences with Lisp come from several pre-Scheme
>> encounters. They include a rather poor (by today's standards)
>> dialect called Interlisp, which worked OK but was a little
>> restricted, and a Lisp machine that offered no protection
>> whatsoever for the system internals and, consequently, crashed in
>> ways that make even Windows ME look like the incarnation of
>> stability.
>
> Probably, you have tried, as I did, to use Xerox Lisp Machines in
> standalone mode.

They were not Xerox, but something similar.
That was about the late 1980s, I think - Lisp machines being on or
slightly after the peak of their hype.
It might indeed have been a server thing though - there was some backup
server in the background. I still found it unacceptable if I could bring
the machine to its knees by inadvertently changing some system
function...
Well, that's long past. The real issues are different.

> (Anyway, even using them standalone was better than Windows ME wrt.
> stability)

Maybe. I'm pretty sure it wasn't Xerox, but something else.
I tried to forget the whole unwholesome episode as quickly as possible :-)

>> before/after/around don't offer anything that a simple call to an
>> ancestor method wouldn't offer, and with less linguistic
>> requirements.
>
> Yes they do. E.g. :around-methods wrap _around_ the primary method.
> If you want the most specific method to be called first, you can
> always use ordinary ancestor methods (call-next-method).

No, I just want to use the implementation that happens to be useful for
my subclass.

> I agree that multimethods must be used with care. But so must many
> constructs, even in languages that try hard to put a straitjacket
> on their users.

Hey, straightjackets aren't enough, you need ball and chains! ;-)

Seriously, I see that multimethods can solve problems. It's just that I
have seen so many constructs that were later replaced by slightly less
powerful and much safer constructs.

>> stupid coding error, and you simply fix the problem? (C programmers
>> don't see C as a dangerous language, they just consider race
>> conditions and buffer overruns as stupid programming mistakes as
>> well... deficits of a language are easier to see if you take an
>> outside perspective.)
>
> Sorry, but I don't think the multimethod/buffer overrun analogy is a
> fair one (except if you can come up with at least one real life
> example of multi-method-hell at work).

Too detailed for now.

>> In practice, most programmers aren't great, they are average.
>> Assuming a halfways sane distribution, 50% of all programmers are
>> even /below/ average - and their services are still very much in
>> need. How should they get their work done? Educating them isn't an
>> option - if that were a possibility, it would have long been done.
>
> The problem with this attitude, is that below-average programmers
> usually advance, through the Dilbert Principle, to pointy-haired
> positions, and if there is this idea that "everybody must use the
> same programming language, and it must be a simple one" floating
> around as a principle, you can be sure that they grab it.

Hey, but that's sensible. The fewer languages you have in shop, the fewer
problems you have reassigning people between tasks. And every constraint
removed makes life easier - there are already enough constraints to
keep satisfied.

I don't pretend I'm happy with that. I just mean that I can understand
the PHB reasoning at work, and I don't know any good alternatives.
Welcome to real life - TANSTAAFL.

> Another point: Do you have any _substantial_ reasons for claiming
> that mediocre (or worse) programmers _really_ work better with any of
> the more mainstream languages than they would do with Common Lisp (if
> we for a moment disregard the most simple 'library-assembling-
> programming' in VB or java where CL obviously currently has a
> drawback by supporting fewer libraries)?

They certainly would work better with a language with a "more standard"
syntax ("Lots of Irritating Superfluous Parentheses" and all).
I know it's something that people learn to "see through", but it's
certainly massively irritating to get used to.

I did some programming in Lisp, but I never fully got rid of that
parenthesis paranoia... maybe a full year of part-time Lisp programming
isn't enough.
Though that's already too long for a commercial project. Job
satisfaction is an important factor, and forcing developers to adapt to
many parentheses is just a needless irritation (from a boss's
perspective, Lispers will do fine with them of course).

> Yet another point: Inside every substantially advanced program,
> there's a lisp trying to get out.

I agree with that, though one could replace "Lisp" with other language
names.

> _My_ idea (this is just guessing, I admit), is that large CL systems
> are _easier_ to maintain, even for Joe Notsobright, than large
> complicated systems written in other languages, where all the fancy
> mechanisms have been reinvented (had to be reinvented!) in much more
> obscure ways to make the program work.

I have seen similar claims for almost any language "with a mission",
such as Smalltalk and Eiffel. Which happens to be the languages that I
dug into well enough to see such statements - I'm pretty sure that many
other languages claim this as well.

I'd be interested in any hard facts about such issues. (I know that they
are difficult if not impossible to come by. I'm just thinking wishfully,
and aloud...)

Regards,
Jo

Joachim Durchholz

unread,
Oct 20, 2003, 2:55:23 PM10/20/03
to
Pascal Costanza wrote:

> Joachim Durchholz wrote:
>
>> Multimethods, on the other hand, are indeed powerful, but they are
>> also dangerous.
>
> Life is dangerous.

OK - I agree that danger is a factor only if better alternatives are
available.

>> Multimethods are just a case where problems disguise themselves as
>> coding errors - looking at the sophisticated user-definable machinery
>> for selecting the right method during multimethod dispatch, it seems
>> like these problems indeed showed up, and were "solved" by adding
>> further baroqueness to the language. To the point that reading the
>> source code of a function will not reveal what's actually happening,
>> because some quirk in multimethod resolution strategy may select
>> entirely different subfunctions than those that the reader would have
>> expected.
>> From a software maintenance perspective, this is pure disaster.
>
> Is this based on actual experience, or are you just guessing?

Guessing.
Educated guessing though.
I admit that practical experience would be better. But my time budget is
limited, so I have to rely on guesswork. (Like most programmers.)

However, I think that the problems will usually be attributed to the
wrong reasons. Most people don't look past immediate causes of their
software failures (that's why I gave the C buffer overflow problem - not
because C buffers are similar to multimethods, they aren't, but to
demonstrate the mode of thinking that attributes common problems to
other places than their real roots).

Of course, if multimethods are used rarely, the problems will be rare.

>> In practice, most programmers aren't great, they are average. Assuming
>> a halfways sane distribution, 50% of all programmers are even /below/
>> average - and their services are still very much in need.
>> How should they get their work done?
>> Educating them isn't an option - if that were a possibility, it would
>> have long been done.
>
> No, because people are already educated under the assumption that they
> are not bright. This assumption is very deeply rooted in our society,
> but I don't see any evidence that it has actually improved anything.

No, but there's nothing to improve here.

> To
> the contrary, it seems to me that people stay "average" _because_ they
> are treated this way.

Only in some cases. The majority of people slacken off and refuse to
learn after some point in life. Some do this at the age of 20,
others wait until they are 60, and some stay interested and alert for
their entire life - but they are a minority.
Heck, I myself feel the temptation to ease down...

> IMHO it's very cynical to assume that other people
> are less bright than oneself.

It's not brightness. It's willingness to learn.
Besides, I never said I'm brighter than others - I'm most certainly not.
I do have a knack for system design, which is offset by deficits in many
other areas (which are irrelevant to technical newsgroups, so they don't
stick out).

> Did it ever occur to you that learning a language designed for experts
> can make you a better programmer?

Learning /any/ language with a yet-unknown paradigm will make you a
better programmer. Actually I learned a lot from (in chronological
order) Lisp, Prolog, Smalltalk, Eiffel, and Haskell.
This doesn't mean that I think that I'd want to use all these languages.

Regards,
Jo

Espen Vestre

unread,
Oct 20, 2003, 3:33:32 PM10/20/03
to
Joachim Durchholz <joachim....@web.de> writes:

> Maybe. I'm pretty sure it wasn't Xerox, but something else.

Perhaps a "Siemens lisp machine"? (which was a Xerox with a Siemens
sticker on it :-))

> > Yes they do. E.g. :around-methods wrap _around_ the primary method.
> > If you want the most specific method to be called first, you can
> > always use ordinary ancestor methods (call-next-method).
>
> No, I just want to use the implementation that happens to be useful for
> my subclass.

What a funny comment, I don't think you understand how this works.
Try to do some CLOS programming, maybe you'll like it!

> I don't pretend I'm happy with that. I just mean that I can understand
> the PHB reasoning at work, and I don't know any good alternatives.
> Welcome to real life - TANSTAAFL.

Who said anything about a free lunch? And why do you think you need
to welcome me to real life? I know quite a deal about badly organized
companies and how the Dilbert principle works in real life.

> Though that's already too long for a commercial project. Job
> satisfaction is an important factor, and forcing developers to adopt to
> many parentheses is just a needless irritation (from a boss's
> perspective, Lispers will do fine with them of course).

I have seen programmers adapt very quickly to lisp syntax, I don't
know why you had a problem with it. Maybe you had bad instructors
or a bad programming environment.

> > Yet another point: Inside every substantially advanced program,
> > there's a lisp trying to get out.
>
> I agree with that, though one could replace "Lisp" with other language
> names.

No, you can't.
--
(espen)

Pascal Bourguignon

unread,
Oct 20, 2003, 4:08:30 PM10/20/03
to
Steve Schafer <s...@reply.to.header> writes:

> On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
> <sp...@thalassa.informatimago.com> wrote:
>
> >Even in case of hardware failure, there's no reason to shut down the
> >mind; just go on with what you have.
>
> When the thing that failed is a very large rocket having a very large
> momentum, and containing a very large amount of very volatile fuel, it
> makes sense to give up and shut down in the safest possible way.

You have to define a "dangerous" situation. Remember that this
"safest possible way" is usually to blow the rocket up. AFAIK, while
this parameter was out of range, there was no instability and the
rocket was not uncontrollable.

> Also keep in mind that this was a "can't possibly happen" failure
> scenario. If you've deemed that it is something that can't possibly
> happen, you are necessarily admitting that you have no idea how to
> respond in a meaningful way if it somehow does happen.

My point. This "can't possibly happen" failure did happen, so clearly
it was not a "can't possibly happen" physically, which means that the
problem was with the software. We know it, but what I'm saying is that
smarter software could have deduced it on the fly.

We all agree that it would be better to have a perfect world and
perfect, bug-free, software. But since that's not the case, I'm
saying that instead of having software that behaves like simple unix C
tools, where as soon as there is an unexpected situation, it calls
perror() and exit(), it would be better to have smarter software that
can try and handle UNEXPECTED error situations, including its own
bugs. I would feel safer in an AI rocket.
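
The crudest Lisp rendering of that idea is a guard around each control
step (LOG-CONDITION and *LAST-GOOD-OUTPUT* are invented names):

(defun robust-step (step)
  ;; run one control-loop step; on any unexpected condition, log it
  ;; and fall back to the last known-good output instead of exiting
  (handler-case (funcall step)
    (serious-condition (c)
      (log-condition c)
      *last-good-output*)))

Whether falling back is actually safer than shutting down is, of
course, the whole question.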

Garry Hodgson

unread,
Oct 20, 2003, 4:30:16 PM10/20/03
to

ssia

Tim Sweeney

unread,
Oct 20, 2003, 4:52:14 PM10/20/03
to
> THE GOOD:
> THE BAD:
>
> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
> 90% of the code is function applications. Why not make it convenient?
>
> 9. Syntax for arrays is also bad [a (b c d) e f] would be better
> than [a, b(c,d), e, f]

Agreed with your analysis, except for these two items.

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried. Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls
which translate back to "f(x,y)". Widespread use of currying can lead
to weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less useful place.

I think #9 is inconsistent with #1.

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Marcin 'Qrczak' Kowalczyk

unread,
Oct 20, 2003, 5:35:55 PM10/20/03
to
On Mon, 20 Oct 2003 13:52:14 -0700, Tim Sweeney wrote:

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.

No, Lisp doesn't curry. It really writes "(f x y)", which is different
from "((f x) y)" (which is actually Scheme, not Lisp).

In fact the syntax "f x y" without mandatory parens fits non-lispish
non-curried syntaxes too. The space doesn't have to be left- or
right-associative; it just binds all arguments at once, and this
expression is different both from "f (x y)" and "(f x) y".

The only glitch is that you have to express application to 0 arguments
somehow. If you use "f()", you can't use "()" as an expression (for
empty tuple for example). But when you accept it, it works. It's my
favorite function application syntax.
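
In Python terms the distinction looks roughly like this (a small
illustrative sketch, with invented names):

# Uncurried: binds both arguments at once, like Lisp's (f x y).
def add(x, y):
    return x + y

# Curried: each application binds one argument, like ((f x) y).
def add_curried(x):
    def rest(y):
        return x + y
    return rest

print(add(2, 3))          # 5
print(add_curried(2)(3))  # 5, via two separate applications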

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Andrew Dalke

unread,
Oct 20, 2003, 6:07:31 PM10/20/03
to
Pascal Bourguignon:

> We all agree that it would be better to have a perfect world and
> perfect, bug-free, software. But since that's not the case, I'm
> saying that instead of having software that behaves like simple unix C
> tools, where as soon as there is an unexpected situation, it calls
> perror() and exit(), it would be better to have smarter software that
> can try and handle UNEXPECTED error situations, including its own
> bugs. I would feel safer in an AI rocket.

Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception? How does it decide that an UNEXPECTED
error situation can be recovered? How would you implement it?
How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)

I agree it would be better to have software which can do that.
I have no good idea of how that's done. (And bear in mind that
my XEmacs session dies about once a year, eg, once when NFS
was acting flaky underneath it and a couple times because it
couldn't handle something X threw at it. ;)

The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumptive architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall) but pointed out the hard part about this approach is
that it's hard to understand, and the result used various defects
on the chip (part of the circuit wasn't used but the chip wouldn't
work without it) which makes the result harder to mass produce.

Andrew
da...@dalkescientific.com


Tomasz Zielonka

unread,
Oct 20, 2003, 6:41:44 PM10/20/03
to
Kenny Tilton wrote:
>
> Reliability does not have to come from a strait-jacket language. Here is
> a C++ Fanatic converted to Python, who gets reliability from test-driven
> development: http://www.artima.com/weblogs/viewpost.jsp?thread=4639

Come on, C++ is an underspecified language with loose semantics (for
compatibility and efficiency reasons).

How often do you accidentally introduce ''undefined/unspecified
behaviour'' in your Python programs? How often do you get segfaults in
Python which are caused by your mistake, not a bug in the interpreter?

Best regards,
Tom

--
.signature: Too many levels of symbolic links

Steve Schafer

unread,
Oct 20, 2003, 8:27:34 PM10/20/03
to
On 20 Oct 2003 22:08:30 +0200, Pascal Bourguignon
<sp...@thalassa.informatimago.com> wrote:

>AFAIK, while this parameter was out of range, there was no instability
>and the rocket was not uncontrolable.

That's perfectly true, but also perfectly irrelevant. When your
carefully designed software has just told you that your rocket, which,
you may recall, is traveling at several thousand metres per second, has
just entered a "can't possibly happen" state, you don't exactly have a
lot of time in which to analyze all of the conflicting information and
decide which to trust and which not to trust. Whether that sort of
decision-making is done by engineers on the ground or by human pilots or
by some as yet undesigned intelligent flight control system, the answer
is the same: Do the safe thing first, and then try to figure out what
happened.

All well-posed problems have boundary conditions, and the solutions to
those problems are bounded as well. No matter what the problem or its
means of solution, a boundary is there, and if you somehow cross that
boundary, you're toast. In particular, the difficulty with AI systems is
that while they can certainly enlarge the boundary, they also tend to
make it fuzzier and less predictable, which means that testing becomes
much less reliable. There are numerous examples where human operators
have done the "sensible" thing, with catastrophic consequences.

>My point.

Well, actually, no. I assure you that my point is very different from
yours.

>This "can't possibly happen" failure did happen, so clearly it was not
>a "can't possibly happen" physically, which means that the problem was
>with the software.

No, it still was a "can't possibly happen" scenario, from the point of
view of the designed solution. And there was nothing wrong with the
software. The difficulty arose because the solution for one problem was
applied to a different problem (i.e., the boundary was crossed).

>it would be better to have smarter software that can try and handle
>UNEXPECTED error situations

I think you're failing to grasp the enormity of the concept of "can't
possibly happen." There's a big difference between merely "unexpected"
and "can't possibly happen." "Unexpected" most often means that you
haven't sufficiently analyzed the situation. "Can't possibly happen," on
the other hand, means that you've analyzed the situation and determined
that the scenario is outside the realm of physical or logical
possibility. There is simply no meaningful means of recovery from a
"can't possibly happen" scenario. No matter how smart your software is,
there will be "can't possibly happen" scenarios outside the boundary,
and your software is going to have to shut down.

>I would feel safer in an AI rocket.

What frightens me most is that I know that there are engineers working
on safety-critical systems that feel the same way. By all means, make
your flight control system as sophisticated and intelligent as you want,
but don't forget to include a simple, reliable, dumber-than-dirt
ejection system that "can't possibly fail" when the "can't possibly
happen" scenario happens.

Let me try to summarize the philosophical differences here: First of
all, I wholeheartedly agree that a more sophisticated software system
_may_ have prevented the destruction of the rocket. Even so, I think the
likelihood of that is rather small. (For some insight into why I think
so, you might want to take a look at Henry Petroski's _To Engineer is
Human_.) Where we differ is how much impact we believe that more
sophisticated software would have on the problem. I get the impression
that you believe that an AI-based system would drastically reduce
(perhaps even eliminate?) the "can't possibly happen" scenario. I, on
the other hand, believe that even the most sophisticated system enlarges
the boundary of the solution space by only a very small amount--the area
occupied by "can't possibly happen" scenarios remains far greater than
that occupied by "software works correctly and saves the rocket"
scenarios.

-Steve

Bill Anderson

unread,
Oct 20, 2003, 8:37:58 PM10/20/03
to
On Mon, 20 Oct 2003 19:33:32 +0000, Espen Vestre wrote:

> Joachim Durchholz <joachim....@web.de> writes:
> ...


>> > Yet another point: Inside every substantially advanced program,
>> > there's a lisp trying to get out.
>>
>> I agree with that, though one could replace "Lisp" with other language
>> names.
>
> No, you can't.

True, for within every substantially advanced Python program is a lumberjack
trying to get out.

/BA

Matthew Danish

unread,
Oct 20, 2003, 9:30:26 PM10/20/03
to
On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:
> > 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
> > 90% of the code is function applications. Why not make it convenient?
> >
> > 9. Syntax for arrays is also bad [a (b c d) e f] would be better
> > than [a, b(c,d), e, f]
> #1 is a matter of opinion, but in general:
>
> - f(x,y) is the standard set by mathematical notation and all the
> mainstream programming language families, and is library neutral:
> calling a curried function is f(x)(y), while calling an uncurried
> function is f(x,y).

And lambda notation is: \xy.yx or something like that. Math notation is
rather ad-hoc, designed for shorthand scribbling on paper, and in
general a bad idea to imitate for programming languages which are
written on the computer in an ASCII editor (which is one thing which
bothers me about ML and Haskell).

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried. Otherwise you have a
> weird asymmetry between curried calls "f x y" and uncurried calls
> which translate back to "f(x,y)".

Here's an "aha" moment for you:

In Haskell and ML, the two biggest languages with built-in syntactic
support for currying, there is also a datatype called a tuple (which is
a record with positional fields). All functions, in fact, only take a
single argument. The trick is that the syntax for tuples and the syntax
for currying combine to form the syntax for function calling:

f (x, y, z) ==> calling f with a tuple (x, y, z)
f x (y, z) ==> calling f with x, and then calling the result with (y, z).

This, I think, is a win for a functional language. However, in a
not-so-functionally-oriented language such as Lisp, this gets in the way
of flexible parameter-list parsing, and doesn't provide that much value.
In Lisp, a form's meaning is determined by its first element, hence (f x
y) has a meaning determined by F (whether it is a macro, or functionally
bound), and Lisp permits such things as "optional", "keyword" (a.k.a. by
name) arguments, and ways to obtain the arguments as a list.

"f x y", to Lisp, is just three separate forms (all symbols).

> Widespread use of currying can lead
> to weird error messages when calling functions of many parameters: a
> missing third parameter in a call like f(x,y) is easy to report, while
> with curried notation, "f x y" is still valid, yet results in a type
> other than what you were expecting, moving the error up the AST to a
> less useful place.

Nah, it should still be able to report the line number correctly.
Though I freely admit that the error messages spat out of compilers like
SML/NJ are not so wonderful.

> I think #9 is inconsistent with #1.

I think that if the parser recognizes that it is directly within a [ ]
form, it can figure out that these are not function calls but rather
elements, though it would require that function calls be wrapped in (
)'s now. And the grammar would be made much more complicated I think.

Personally, I prefer (list a (b c d) e f).

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Hmm, rather curious paper. I never really thought of "f x" using
whitespace as an operator--it's a delimiter in the strict sense. The
grammar of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter. I'm not a big fan of required commas because it gets
annoying when you are editing large tables or function calls with many
parameters. The behavior of Emacs's C-M-t or M-t is not terribly good
with extraneous characters like those, though it does try.

--
; Matthew Danish <mda...@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

Thomas F. Burdick

unread,
Oct 20, 2003, 10:11:48 PM10/20/03
to
Matthew Danish <mda...@andrew.cmu.edu> writes:

It's true that (f x y) and "f x y" don't use whitespace as an
operator; however, I attempted something sneaky once, trying to get
lisp used via a custom reader that did use whitespace as an operator
(for the record, it worked until someone figured out what was going
on; then they were pissed, for no rational reason). Its real use
involved all domain-specific functions, but here is some example code that you can read
with SNEAKY:READ :

let (list list 1, 2, 3;;
times 3)
{
dotimes (x, times)
{ format (t, "x is ~S", x);
print list;
}
}

It's all s-expressions, but they look like:

f x, y, z;
or
f (x, y, z);
or
(sexp, sexp, sexp ...)
or
f x, y, {sexp; sexp; ...}
or
f x {sexp; sexp; ...}

It can look remarkably non-lispy, but once one catches on that it's
just a lot of ways of expressing where lists start and end, one can
figure out what's happening pretty quickly.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Michael Geary

unread,
Oct 20, 2003, 10:27:49 PM10/20/03
to
> > In general, I'm wary of notations like "f x" that use whitespace as an
> > operator (see http://www.research.att.com/~bs/whitespace98.pdf).

> Hmm, rather curious paper. I never really thought of "f x" using
> whitespace as an operator--it's a delimiter in the strict sense. The
> grammar of ML and Haskell define that consecutive expressions form a
> function application. Lisp certainly uses whitespace as a simple

> delimiter...

Did you read the cited paper *all the way to the end*?

-Mike


Pascal Bourguignon

unread,
Oct 20, 2003, 11:56:11 PM10/20/03
to
"Andrew Dalke" <ada...@mindspring.com> writes:

> Pascal Bourguignon:
> > We all agree that it would be better to have a perfect world and
> > perfect, bug-free, software. But since that's not the case, I'm
> > saying that instead of having software that behaves like simple unix C
> > tools, where as soon as there is an unexpected situation, it calls
> > perror() and exit(), it would be better to have smarter software that
> > can try and handle UNEXPECTED error situations, including its own
> > bugs. I would feel safer in an AI rocket.
>
> Since it was written in Ada and not C, and since it properly raised
> an exception at that point (as originally designed), which wasn't
> caught at a recoverable point, ending up in the default "better blow
> up than kill people" handler ... what would your AI rocket have
> done with that exception? How does it decide that an UNEXPECTED
> error situation can be recovered?

By looking at the big picture!

The blow-up action would be activated only when the big picture shows
that the AI has no control of the rocket and that it is going down.


> How would you implement it?

Like any AI.

> How would you test it? (Note that the above software wasn't
> tested under realistic conditions; I assume in part because of cost.)

In a simulator. In any case, the point is to have software that is
able to handle even unexpected failures.


> I agree it would be better to have software which can do that.
> I have no good idea of how that's done. (And bear in mind that
> my XEmacs session dies about once a year, eg, once when NFS
> was acting flaky underneath it and a couple times because it
> couldn't handle something X threw at it. ;)

XEmacs is not AI.



> The best examples of resilent architectures I've seen come from
> genetic algorithms and other sorts of feedback training; eg,
> subsumptive architectures for robotics and evolvable hardware.
> There was a great article in CACM on programming an FPGA
> via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
> I recall) but pointed out the hard part about this approach is
> that it's hard to understand, and the result used various defects
> on the chip (part of the circuit wasn't used but the chip wouldn't
> work without it) which makes the result harder to mass produce.
>
> Andrew
> da...@dalkescientific.com

In any case, you're right, the main problem may be that it was
specified to blow up when an unhandled exception was raised...

Andrew Dalke

unread,
Oct 21, 2003, 12:41:07 AM10/21/03
to
Me:

> > How would you test it? (Note that the above software wasn't
> > tested under realistic conditions; I assume in part because of cost.)

Pascal Bourguignon:


> In a simulator. In any case, the point is to have software that is
> able to handle even unexpected failures.

Like I said, the existing code was not tested in a simulator. Why
do you think some AI code *would* be tested for this same case?
(Actually, I believe that an AI would need to be trained in a
simulator, just like humans, but that it would require so much
testing as to preclude its use, for now, in rocket control systems.)

Nor have you given any sort of guideline on how to implement
this sort of AI in the first place. Without it, you've just restated
the dream of many people over the last few centuries. It's a
dream I would like to see happen, which is why I agreed with you.

> > couldn't handle something X threw at it. ;)

> XEmacs is not AI

Yup, which is why the smiley is there. You said that C was
not the language to use (cf your perror/exit comment) and implied
that Ada wasn't either, so I assumed you had a more resilient
programming language in mind. My response was to point
out that Emacs Lisp also crashes (rarely) given unexpected
errors and so imply that Lisp is not the answer.

Truly I believe that programming languages as we know
them are not the (direct) solution, hence my pointers to
evolvable hardware and similar techniques.

Even then, we still have a long way to go before they
can be used to control a rocket. They require a lot of
training (just like people) and software simulators just
won't cut it. The first "AI"s will replace those things
we find simple and commonplace [*] (because our brain
evolved to handle them), and not the hard and rare.

Andrew
da...@dalkescientific.com
[*]
In thinking of some examples, I remembered a passage in
one of Cordwainer Smith's stories. In them, dogs, cats,
eagles, cows, and many other animals were artificially
endowed with intelligence and a human-like shape.
Turtles were bred for tasks which required long patience.
For example, one turtle was assigned the task of standing
by a door in case there was trouble, which he did for
100 years, without complaint.


Matthew Danish

unread,
Oct 21, 2003, 3:42:55 AM10/21/03
to

Why bother? It says "April 1" in the Abstract, and got boring about 2
paragraphs later. I should have scare-quoted "operator" above, or
rather the lack of one, which is interpreted as meaning function
application.

Joachim Durchholz

unread,
Oct 21, 2003, 6:31:54 AM10/21/03
to
Pascal Bourguignon wrote:
> AFAIK, while this parameter was out of range, there was no
> instability and the rocket was not uncontrolable.

Actually, the rocket had started correcting its orientation according to
the bogus data, which resulted in uncontrollable turning. The rocket
would have broken into parts in an uncontrollable manner, so it was
blown up.
(The human operator decided to press the emergency self-destruct button
seconds before the control software would have initiated self destruct.)

> My point. This "can't possibly happen" failure did happen, so
> clearly it was not a "can't possibly happen" physically, which means
> that the problem was with the software. We know it, but what I'm
> saying is that a smarter software could have deduced it on fly.

No. The smartest software will not save you from human error. It was a
specification error.
The only way to detect this error (apart from more testing) would have
been to model the physics of the rocket, in software, and either verify
the flight control software against the rocket model or to test run the
whole thing in software. (I guess neither of these options would have
been cheaper than the simple test runs that were deliberately omitted,
probably on the grounds of "we /know/ it works, it worked in the Ariane 4".)

> We all agree that it would be better to have a perfect world
> and perfect, bug-free, software. But since that's not the case,
> I'm saying that instead of having software that behaves like simple
> unix C tools, where as soon as there is an unexpected situation,
> it calls perror() and exit(), it would be better to have smarter
> software that can try and handle UNEXPECTED error situations,
> including its own bugs. I would feel safer in an AI rocket.

This all may be true, but you're solving problems that didn't cause the
Ariane crash.

Regards,
Jo

Joachim Durchholz

unread,
Oct 21, 2003, 6:39:23 AM10/21/03
to
Espen Vestre wrote:

> Joachim Durchholz <joachim....@web.de> writes:
>
>> Maybe. I'm pretty sure it wasn't Xerox, but something else.
>
> Perhaps a "Siemens lisp machine"? (which was a Xerox with a Siemens
> sticket on it :-))

No, it had some US label on it - but my recollections are getting quite
dim here, I fear I'm producing random noise instead of information on
that topic.
It's all too long in the past :-)

(Actually, the mainframe that we were working on was a truly
international machine: the outside stickers said "Siemens", the inside
stickers said "Fujitsu", and the manuals said "IBM". That was before
Siemens bought Fujitsu.)

>> Though that's already too long for a commercial project. Job
>> satisfaction is an important factor, and forcing developers to
>> adapt to many parentheses is just a needless irritation (from a
>> boss's perspective, Lispers will do fine with them of course).
>
> I have seen programmers adapt very quickly to lisp syntax, I don't
> know why you had a problem with it. Maybe you had bad instructors or
> a bad programming environment.

No, your sample is biased: your everyday acquaintances are those who,
sooner or later, made that transition. Those who don't will vanish from
your surroundings, sooner or later.
(Admittedly, I'm guessing here. But this sounds reasonable, at least to
me.)

Anyway. Whether the PHBs' reasoning is valid or not, it will keep Lisp
from entering the mainstream in the foreseeable future.

>>> Yet another point: Inside every substantially advanced program,
>>> there's a lisp trying to get out.
>>
>> I agree with that, though one could replace "Lisp" with other
>> language names.
>
> No, you can't.

Well, I have seen other names in similar quotes...

Regards,
Jo

Joachim Durchholz

unread,
Oct 21, 2003, 6:58:54 AM10/21/03
to
Tim Sweeney wrote:
>>
>>1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
>> 90% of the code is function applications. Why not make it convenient?
>>
>>9. Syntax for arrays is also bad [a (b c d) e f] would be better
>> than [a, b(c,d), e, f]
>
> Agreed with your analysis, except for these two items.
>
> #1 is a matter of opinion, but in general:
>
> - f(x,y) is the standard set by mathematical notation and all the
> mainstream programming language families, and is library neutral:
> calling a curried function is f(x)(y), while calling an uncurried
> function is f(x,y).

Well, in most functional languages, curried functions are the standard.
This has some syntactic advantages, in areas that go beyond mathematical
tradition. (Since each branch of mathematics has its own traditions,
it's probably possible to find a branch where the functional programming
way of writing functions is indeed tradition *g*)

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.

No, Lisp languages require parentheses around the call, i.e.
(f x y)
Lisp does share the trait that it doesn't need commas.

> Otherwise you have a
> weird asymmetry between curried calls "f x y" and uncurried calls
> which translate back to "f(x,y)".

It's not an asymmetry. "f x y" is a call to a function of two parameters.
"f (x, y)" is a call to a function of a single parameter, which is an ordered pair.
In most cases such a difference is irrelevant, but there are cases where
it isn't.

> Widespread use of currying can lead
> to weird error messages when calling functions of many parameters: a
> missing third parameter in a call like f(x,y) is easy to report, while
> with curried notation, "f x y" is still valid, yet results in a type
> other than what you were expecting, moving the error up the AST to a
> less useful place.

That's right.
On the other hand, it makes it easy to write code that just fills the
first parameter of a function, and returns the result. Such code is so
commonplace that having weird error messages is considered a small price
to pay.
Actually, writing functional code is more about sticking together
functions than actually calling them. With such use, having to write
code like
f (x, ...)
instead of
f x
will gain in precision, but it will clutter up the code so much that I'd
expect the gain in readability to be little, nonexistent or even negative.
It might be interesting to transform real-life code to a more standard
syntax and see whether my expectation indeed holds.
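
For instance, in Python terms (a rough sketch using functools.partial,
which arrived in Python versions later than this thread; names are
invented for illustration):

from functools import partial

def power(base, exponent):
    return base ** exponent

# Fill the first parameter and return the result; in a curried
# language this is simply written "power 2".
power_of_two = partial(power, 2)
print(power_of_two(10))  # 1024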

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

That was an April Fool's joke. A particularly clever one: the paper
starts by laying a marginally reasonable groundwork, only to advance
into realms of absurdity later on.
It would be unreasonable to make whitespace an operator in C++. This
doesn't mean that a language with a syntax designed for whitespace
cannot be reasonable, and in fact some languages do that, with good
effect. Reading Haskell code is like a fresh breeze, since you don't
have to mentally filter out all that syntactic noise.
The downside is that it's easy to get some detail wrong. One example is
a decision (was that Python?) to equate a tab with eight blanks, which
tends to mess up syntactic structure when editing the code with
over-eager editors. There are some other lessons to learn - but then,
whitespace-as-syntactic-element is a relatively new concept, and people
are still playing with it and trying out alternatives. The idea in
itself is useful, its incarnations aren't perfect (yet).

Regards,
Jo

Espen Vestre

unread,
Oct 21, 2003, 7:53:22 AM10/21/03
to
Joachim Durchholz <joachim....@web.de> writes:

> No, your sample is biased: your everyday acquaintances are those who,
> sooner or later, made that transition. Those who don't will vanish from
> your surroundings, sooner or later.
> (Admittedly, I'm guessing here. But this sounds reasonable, at least for
> me.)

You guess a lot, and your guesses are not very educated.
--
(espen)

Jerzy Karczmarczuk

unread,
Oct 21, 2003, 9:01:39 AM10/21/03
to

WILL YOU STOP THAT PLEASE?!!!

I hate it when people read and quote ONLY what they want to see and quote.
I remind you that JoD answered the following remark:

> I have seen programmers adapt very quickly to lisp syntax, I don't know why
> you had a problem with it. Maybe you had bad instructors or
> a bad programming environment.

I would keep my mouth shut if the author of the citation above were somebody
else, not Espen Vestre Himself. Peculiar context for mentioning the word
"education"...

==

There is a movie where the hero, a little boy, tells Bruce Willis that the people
with whom he communicates see only the things they want to see.

Those people are DEAD.

Be careful Monsieur Espen Vestre.

===========

Anyway, all this proves that cross-newsgroup postings are calamitous. I removed
the Pythonistas from *this one*; they seem innocent, until the next abuse...

Jerzy Karczmarczuk

Espen Vestre

unread,
Oct 21, 2003, 10:12:36 AM10/21/03
to
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

> WILL YOU STOP THAT PLEASE?!!!

Will you please stop shouting?

> I hate it when people read and quote ONLY what they want to see and quote.

If you followed the whole thread, you would have seen that most of the
discussion was caused by Joachim making completely false claims about
CLOS, which were based on making wrong _guesses_ about the behaviour
from looking at the specs.

My somewhat rude comment must be seen in that context, and I should of
course rather have shut my mouth.

> There is a movie where the hero, a little boy tells Bruce Willis that people
> with whom he communicates see only things they want to see.
>
> Those people are DEAD.
>
> Be careful Monsieur Espen Vestre.

What on earth are you insinuating? This is getting ridiculous.

--
(espen)

Frode Vatvedt Fjeld

unread,
Oct 21, 2003, 11:13:30 AM10/21/03
to al...@aleax.it
Alex Martelli <al...@aleax.it> writes:

> [..] the EXISTING call to foo() will NOT be "affected" by the "del
> foo" that happens right in the middle of it, since there is no
> further attempt to look up the name "foo" in the rest of that call's
> progress. [..]

What this and my other investigations amount to is that in Python a
"name" is somewhat like a lisp symbol [1]. In particular, it is an
object that has a pre-computed hash-key, which is why
hash-table/dictionary lookups are reasonably efficient. My worry was
that the actual string hash-key would have to be computed at every
function call, which I believe would slow down the process some 10-100
times. I'm happy to hear it is not so.

[1] One major difference being that Python names are not first-class
objects. This is a big mistake wrt. supporting interactive
programming in my personal opinion.
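
Both behaviours are easy to check interactively (a small experiment,
not from Alex's post):

def foo():
    print("foo is running")
    del globals()['foo']   # unbind the name while the call runs
    print("the current call is unaffected")

foo()                      # completes normally
try:
    foo()                  # the name "foo" is looked up again here...
except NameError:
    print("...and the lookup now fails: foo is gone")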

> As for your worries elsewhere expressed that name lookup may impose
> excessive overhead, in Python we like to MEASURE performance issues
> rather than just reason about them "abstractly"; which is why Python
> comes with a handy timeit.py script to time a code snippet
> accurately. [...]

Thank you for the detailed information. Still, I'm sure you will agree
that sometimes reasoning about things can provide insight with
predictive powers that you cannot achieve by mere experimentation.

--
Frode Vatvedt Fjeld

Terry Reedy

unread,
Oct 21, 2003, 1:44:42 PM10/21/03
to

"Frode Vatvedt Fjeld" <fro...@cs.uit.no> wrote in message
news:2hk76yl...@vserver.cs.uit.no...

> What this and my other investigations amount to, is that in Python a
> "name" is somewhat like a lisp symbol [1].

This is true in that names are bound to objects rather than
representing a block of memory.

>In particular, it is an object that has a pre-computed hash-key,

NO. There is no name type. 'Name' is a grammatical category, with
particular syntax rules, for Python code, just like 'expression',
'statement' and many others.

A name *may* be represented at runtime as a string, as CPython
*sometimes* does. The implementation *may*, for efficiency, give
strings a hidden hash value attribute, which CPython does.

For even faster runtime 'name lookup' an implementation may represent
names as slot numbers (indexes) into a hidden, non-Python array. CPython
does this (with C pointer arrays) for function locals whenever the
list of locals is fixed at compile time, which is usually. (To
prevent this optimization, add to a function body something like 'from
mymod import *', if still allowed, which makes the number of locals
unknowable until runtime.)

To learn about generated bytecodes, read the dis module docs and use
dis.dis.
For example:
>>> import dis
>>> def f(a):
...     b=a+1
...
>>> dis.dis(f)
  0 SET_LINENO             1

  3 SET_LINENO             2
  6 LOAD_FAST              0 (a)
  9 LOAD_CONST             1 (1)
 12 BINARY_ADD
 13 STORE_FAST             1 (b)
 16 LOAD_CONST             0 (None)
 19 RETURN_VALUE
This says: load (onto stack) first pointer in local_vars array and
second pointer in local-constants array, add referenced values and
replace operand pointers with pointer to result, store that result
pointer in the second slot of local_vars, load first constant pointer
(always to None), and return.
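
For contrast, a name that is not a compile-time-known local falls back
to dictionary lookup. A quick check (exact opcodes vary by CPython
version):

import dis

n = 1

def g(a):
    return a + n   # "a" is a fast local, "n" is a global

dis.dis(g)
# The output shows LOAD_FAST for "a" (array indexing by slot) but
# LOAD_GLOBAL for "n" (a hashed dictionary lookup).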

Who knows what *we* do when we read, parse, and possibly execute
Python code.

Terry J. Reedy


Frode Vatvedt Fjeld

unread,
Oct 21, 2003, 2:31:15 PM10/21/03
to
"Terry Reedy" <tjr...@udel.edu> writes:

> [..] For even faster runtime 'name lookup' an implementation may
> represent names as slot numbers (indexes) into a hidden, non-Python
> array. CPython does this (with C pointer arrays) for function
> locals whenever the list of locals is fixed at compile time, which
> is usually. (To prevent this optimization, add to a function body
> something like 'from mymod import *', if still allowed, which makes
> the number of locals unknowable until runtime.) [..]

This certainly does not ease my worries over Python's abilities with
respect to interactivity and dynamism.

--
Frode Vatvedt Fjeld

Alex Martelli

unread,
Oct 21, 2003, 6:51:57 PM10/21/03
to
Frode Vatvedt Fjeld wrote:
...

>> excessive overhead, in Python we like to MEASURE performance issues
>> rather than just reason about them "abstractly"; which is why Python
>> comes with a handy timeit.py script to time a code snippet
>> accurately. [...]
>
> Thank you for the detailed information. Still, I'm sure you will agree
> that sometimes reasoning about things can provide insight with
> predictive powers that you cannot achieve by mere experimentation.

A few centuries ago, a compatriot of mine was threatened with
torture, and backed off, because he had dared state that "all
science comes from experience" -- he refuted the "reasoning
about things" by MEASURING (and fudging the numbers, if the
chi-square tests on his reports of the inclined-plane
experiments are right -- but then, Italians _are_ notoriously
untrustworthy, even though sometimes geniuses;-).

These days, I'd hope not to be threatened with torture if I assert:
"reasoning" is cheap, that's its advantage -- it can lead you to
advance predictive hypotheses much faster than mere "data
mining" through masses of data might yield them. But those
hypotheses are very dubious until you've MEASURED what they
predict. If you don't (or can't) measure, you don't _really KNOW_;
you just _OPINE_ (reasonably or not, justifiably or not, etc). One
independently repeatable measurement trumps a thousand clever
reasonings, when that measurement gives numbers contradicting
the reasonings' predictions -- that one number sends you back to
the drawing board.

Or, at least, that's how we humble engineers see the world...
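
In that spirit, the kind of measurement timeit makes cheap looks
roughly like this (a sketch only; the actual numbers are yours to go
and collect). It compares a hashed dictionary lookup, roughly how
global names are found, against indexing by position, roughly how
fast locals are found:

import timeit

# Hashed dictionary lookup -- roughly how global names are found.
dict_timer = timeit.Timer("d['key']", "d = {'key': 1}")
# Indexing by position -- roughly how fast locals are found.
list_timer = timeit.Timer("lst[0]", "lst = [1]")

print(min(dict_timer.repeat(3, 1000000)))
print(min(list_timer.repeat(3, 1000000)))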


Alex

Pascal Bourguignon

unread,
Oct 21, 2003, 7:53:22 PM10/21/03
to
"Andrew Dalke" <ada...@mindspring.com> writes:
> [...]

> Nor have you given any sort of guideline on how to implement
> this sort of AI in the first place. Without it, you've just restated
> the dream of many people over the last few centuries. It's a
> dream I would like to see happen, which is why I agreed with you.
> [...]

> Truely I believe that programming languages as we know
> them are not the (direct) solution, hence my pointers to
> evolvable hardware and similar techniques.

You're right, I did not answer. I think that what is missing in
classic software, and that ought to be present in AI software, is some
introspective control: having a process checking that the other
processes are live and progressing, and able to act to correct any
infinite loop, breakdown or dead-lock. Some hardware may help in
controlling this controlling software, like on the latest Macintosh:
they automatically restart when the system is hung. And purely at the
hardware level, for a real-life system, you can't rely on only one
processor.
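
A bare-bones sketch of such a control process in Python (illustrative
only; threads stand in for whatever process machinery a real system
would use, and all names are invented):

import threading, time

heartbeats = {}

def worker(name):
    while True:
        heartbeats[name] = time.time()  # "I am alive and progressing"
        time.sleep(0.1)                 # placeholder for real work

def start_worker(name):
    t = threading.Thread(target=worker, args=(name,))
    t.daemon = True
    t.start()

def supervise(names, timeout=1.0):
    for name in names:
        start_worker(name)
    while True:
        time.sleep(timeout)
        now = time.time()
        for name in names:
            if now - heartbeats.get(name, 0.0) > timeout:
                heartbeats[name] = now  # damp restart storms
                start_worker(name)      # corrective action: restart

# supervise(["engine", "guidance"])  # runs forever, monitoring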

--
__Pascal_Bourguignon__
http://www.informatimago.com/

Pascal Bourguignon

unread,
Oct 21, 2003, 8:04:45 PM10/21/03
to

t...@epicgames.com (Tim Sweeney) writes:
> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

The \\ comment successor is GREAT!

--
__Pascal_Bourguignon__
http://www.informatimago.com/

John Atwood

unread,
Oct 21, 2003, 8:25:01 PM10/21/03
to
Andrew Dalke <ada...@mindspring.com> wrote:

>The best examples of resilient architectures I've seen come from
>genetic algorithms and other sorts of feedback training; eg,
>subsumptive architectures for robotics and evolvable hardware.
>There was a great article in CACM on programming an FPGA
>via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
>I recall) but pointed out the hard part about this approach is
>that it's hard to understand, and the result used various defects
>on the chip (part of the circuit wasn't used but the chip wouldn't
>work without it) which makes the result harder to mass produce.

something along these lines?
http://www.cogs.susx.ac.uk/users/adrianth/cacm99/node3.html


John

Marshall Spight

unread,
Oct 22, 2003, 11:27:42 AM10/22/03
to
"Scott McIntire" <mcintire_c...@comcast.net> wrote in message news:MoEkb.821534$YN5.832338@sccrnsc01...
> It seems to me that the Agency would have fared better if they just used
> Lisp - which has bignums - and relied more on regression suites and less on
> the belief that static type checking systems would save the day.

I find that an odd conclusion. Given that the cost of bugs is so high
(especially in the cited case) I don't see a good reason for discarding
*anything* that leads to better correctness. Yes, bignums is a good
idea: overflow bugs in this day and age are as bad as C-style buffer
overruns. Why work with a language that allows them when there
are languages that don't?

But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?


Marshall


Pascal Costanza

unread,
Oct 22, 2003, 11:37:26 AM10/22/03
to

...because static type systems work by reducing the expressive power of
a language. It can't be any different for a strict static type system.
You can't solve the halting problem in a general-purpose language.

This means that eventually you might need to work around language
restrictions, and this introduces new potential sources for bugs.

(Now you could argue that current sophisticated type systems cover 90%
of all cases and that this is good enough, but then I would ask you for
empirical studies that back this claim. ;)

I think soft typing is a good compromise, because it is a mere add-on to
an otherwise dynamically typed language, and it allows programmers to
override the decisions of the static type system when they know better.
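
As one deliberately small illustration (in Python rather than Lisp):
code like the following is routine under dynamic typing, but a strict
static checker needs a union/variant type or some other workaround for
the two different return types:

def parse(value):
    # Returns an int when the string is numeric, otherwise the
    # original string -- a single "int or str" result.
    try:
        return int(value)
    except ValueError:
        return value

print(parse("42"))     # 42
print(parse("hello"))  # hello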


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Garry Hodgson

unread,
Oct 22, 2003, 11:43:05 AM10/22/03
to
Pascal Bourguignon <sp...@thalassa.informatimago.com> wrote:

> You're right, I did not answer. I think that what is missing in
> classic software, and that ought to be present in AI software, is some
> introspective control: having a process checking that the other
> processes are live and progressing, and able to act to correct any
> infinite loop, breakdown or dead-lock.

so assume this AI software was running on Ariane 5, and the same
condition occurs. based on the previously referenced design
assumptions, it is told that there's been a hardware failure, and that
numerical calculations can no longer be trusted. how does it cope
with this?

> Some hardware may help in
> controlling this controlling software, like on the latest Macintosh:
> they automatically restart when the system is hung.

in this case, a restart would cause the same calculations to occur,
and the same failure to be reported.

> And purely at the
> hardware level, for a real life system, you can't rely on only one
> processor.

absolutely right. though, in this case, this wouldn't have helped either.

the fatal error was a process error, and it occurred long before launch.

----
Garry Hodgson, Technology Consultant, AT&T Labs

Be happy for this moment.
This moment is your life.

William Lovas

unread,
Oct 22, 2003, 2:27:56 PM10/22/03
to
In article <bn687n$l6u$1...@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote:

> Marshall Spight wrote:
>> But why should more regression testing mean less static type checking?
>> Both are useful. Both catch bugs. Why ditch one for the other?
>
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.

What do you mean by "reducing the expressive power of the language"? There
are many general-purpose statically typed programming languages that are
Turing-complete, so it's not a theoretical limitation, as you imply.

> This means that eventually you might need to work around language
> restrictions, and this introduces new potential sources for bugs.
>
> (Now you could argue that current sophisticated type systems cover 90%
> of all cases and that this is good enough, but then I would ask you for
> empirical studies that back this claim. ;)

Empirically, i write a lot of O'Caml code, and i never have to write
something in a non-intuitive manner to work around the type system. On the
contrary, every type error the compiler catches in my code indicates code
that *doesn't make sense*. I'd hate to imagine code that doesn't make
sense passing into regression testing. What if i forget to test a
non-sensical condition?

On the flip-side of the coin, i've also written large chunks of Scheme
code, and I *did* find myself making lots of nonsense errors that weren't
caught until run time, which significantly increased development time
and difficulty.

Furthermore, thinking about types during the development process keeps me
honest: i'm much more likely to write code that works if i've spent some
time understanding the problem and the types involved. This sort of
pre-development thinking helps to *eliminate* potential sources for bugs,
not introduce them. Even Scheme advocates encourage this (as in Essentials
of Programming Languages by Friedman, Wand, and Haynes).

> I think soft typing is a good compromise, because it is a mere add-on to
> an otherwise dynamically typed language, and it allows programmers to
> override the decisions of the static type system when they know better.

When do programmers know better? An int is an int and a string is a
string, and never the twain shall be treated the same. I would rather
``1 + "bar"'' signal an error at compile time than at run time.

Personally, i don't understand all this bally-hoo about "dynamic languages"
being the next great leap. Static typing is a luxury!

William

mik...@ziplip.com

unread,
Oct 22, 2003, 2:52:24 PM10/22/03
to
Pascal Costanza wrote:

>
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.

You keep repeating the same ignorant BS for the gazillionth time! You
sound like Martelli talking about macros. Learn Haskell for fuck's
sake! You'll find that even though it's statically typed, it's also more
expressive than Lisp (you get more programming logic for each byte of code)
and it's easier to glue components together (because of purity
and monads)

(All-in-all, I prefer Lisp though. Not because it's more expressive,
but because I like Lisp REPL more than Haskell's)

