
"Programming is FUN again" rambling commentary


Bill Coderre

Mar 10, 1998

Recently in comp.lang.scheme, there has been a thread called "Programming
is FUN again." Summing it up, it seems to me that people are enjoying
using Scheme and FORTH, as opposed to slogging through C.

I've been thinking a lot about what makes programming "FUN." Here's my opinion:

I started programming by writing BASIC programs. I typed them on a
typewriter, and verified them by hand. Most of these programs were never
run on a computer.

What did I write? Mostly simulation games -- the user was driving a tank
across a battlefield being shelled by an unseen army. If you got all the
way across alive, you won.

To me, the idea of presenting a world made out of ASCII characters was
pretty interesting. There are still newsgroups full of ASCII artists. This
is pretty weird considering that a pretty good Pentium or PowerMac will
get you a zillion polygons a second of 3D world monster-killing action.

Part of what made these programs FUN was what I *didn't* have to do to
make them work. No memory management. No operating system handholding. (I
once told someone that the reason I didn't like programming a Mac was that
I really didn't want to participate in making the cursor blink in every
text field -- couldn't I just ask for some input and get it back when the
user is done?)

Definition: A high-level language is one that doesn't require you to get
intimately involved with the lowest levels of the way your computer works.
Lisp and BASIC are both high-level because they don't require you to
manage memory, blink cursors, or dispatch every event to the right part of
the OS.
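
To make that definition concrete, here's the flavor of thing I mean --
a toy Common Lisp fragment, offered purely as illustration:

    ;; Build a list of the first N squares. No malloc, no free:
    ;; the system manages every cons cell behind my back.
    (defun squares (n)
      (loop for i from 1 to n collect (* i i)))

    (squares 5)   ; => (1 4 9 16 25)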

Maybe Lisp and Scheme and BASIC are FUN to program because you get to take
a holiday from dealing with the "low level" stuff in your computer.

Maybe they're FUN because you can translate a high-level idea into a
high-level program without having to do a lot of low-level stuff that just
gets in the way of your high-level idea.

None of this directly addresses FORTH being FUN, but I am guessing that
FORTH is sort of the contrapositive (is that the right word?), in that you
start at a low-level, and stay there. (Of course, my experience with FORTH
is non-existent, so feel free to call me uninformed.)

So FUN might be "getting to translate your ideas into code efficiently,
without having to spend a lot of time dealing with "bureaucracy" code --
stuff that isn't part of the problem at hand, but is required before you
can see the results of your idea."

I know this is one reason I like Lisp -- I hate memory. I am very happy to
let Lisp deal with memory. I will GLADLY pay a speed penalty just to avoid
having to manage memory.

The fact is, though, that Lisp is usually a lot more fastidious about
managing memory correctly than I am -- after all, Lisp has many
person-eons of experience built into it, and that stuff is put into every
program I write, for free.

Perhaps if C programs did all of the error-checking and memory-management
they were supposed to, they would also be slower. (I'm sure there's cases
where Lisp is doing TOO MUCH error checking, and that's resulting in
unnecessary speed loss, but hey.)

It all boils down to this: I am not very much of a speed demon. If
something takes 1/10th of a second instead of 2/10ths, I am not going to
care, onesy-twosy like that (maybe when I have to do a million of them).

I don't know why, but C programmers often seem to be very vocal about
being interested in optimizing for speed. They also seem to get a kick out
of writing routines in as few characters as possible, and a frequent bad
habit of theirs seems to be "pre-optimizing" code, resulting in no speed
improvement, only readability loss.

Yes, playing THAT game (speed and cleverness) can also be FUN. (We've all
probably had fun like that.) Maybe that game is the C programmer's idea of
FUN, and if so, Lisp is the worst language they can imagine, because the
isolation from the "guts" of the computer makes it hard to play that game.

Goodness knows, memory and speed are easier to measure than "ratio of fun
code to bureaucracy code" or "elegance" or what have you.

bc

Will Hartung

Mar 10, 1998

b...@wetware.com (Bill Coderre) writes:

>Recently in comp.lang.scheme, there has been a thread called "Programming
>is FUN again." Summing it up, it seems to me that people are enjoying
>using Scheme and FORTH, as opposed to slogging through C.

>I've been thinking a lot about what makes programming "FUN." Here's my opinion:

>I started programming by writing BASIC programs. I typed them on a
>typewriter, and verified them by hand. Most of these programs were never
>run on a computer.

For me, PIF (Programming Is Fun) when I have some vision in my head, and that vision is
cast into the computer in an almost stream-of-consciousness fashion.

During this "Cortex Dump", the development environment only lightly
intrudes and, at best, reinforces my vision. This happens best when
I'm just coding something, and testing it with quick runs that happen
to work.

This type of event happens a lot when I'm in bed, stewing over
something, and then I just have to pop up, head to the computer, and
bash something into it. An hour later, it works, it's "done", and I
can go to sleep.

The results of such escapades are usually functional, but hardly
finished. In fact, they are almost never "finished". I rarely clean
them up.

I do this in 'C', but only on programs that max out at about 50 lines.
It almost inevitably backfires, and I end up fighting some stupid thing
or another. It's fun for about the first 70%, but then the bugs start
ruining the party, yet I've gone far enough that it's worth slogging
through it to get the program at least functional. These little
utilities end up being used for eternity, yet maintain their colloquial
names like "w2". Similar things happen with 'awk', but they usually
end up in /tmp, and die a neglected death rapidly by a callous cron(1)
job.

I do it in QBasic on DOS. Windoze is horror enough, but it's even
worse without QBasic. I use QBasic like Minesweeper. It's braindead,
it doesn't get in the way, and can do simple things relatively
quickly. It also has a decent little environment. When the going gets
too tough for QBasic, things usually end up getting abandoned. But
that's all right, as the trip is usually fun enough, even if I didn't
"get there".

If I was more familiar with Lisp AND its environment, then I'd be
using it. Lisp itself can have a fairly shallow learning curve. One of
its strengths is that it can be as easy or as hard as you want it to
be. It scales well from high school "Intro to Comp. Sci." classes to
the hairest applications mankind has fed to silicon beasties. I do
think that the environments have a steeper learning curve than the
language, and that's a lot of what I'm fighting today. "Let's learn
Common Lisp AND Emacs simultaneously!" Eyah! It has to be natural and
instinctive to be fun.

Lisp's biggest weakness, now, for me, is that it's hard to see the tree
for the forest. Scheme doesn't help because I usually can only find
seeds and pine cones.

But, overall, for me, PIF when I can paint my vision with thick
brushes, using wide long strokes and bright colors. This is one reason
I don't care much for GUI front-end work. Moving boxes around pixel by
pixel. Checking out hard copy from laser printers to make sure stuff
lines up properly. My "fun" GUI stuff looks a LOT more unfinished than
my "command line" stuff. Badly sized windows with boxes and buttons
placed more by "Seems to be enough room here" policy than any sense of
order. Sometimes I take the time to line things up. There's a lot
of freedom in INPUT "Enter your name", A$.

"Everyday it's the same thing...Variety. I want something different!"
- The King via a Bugs Bunny cartoon.

--
Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr...@netcom.com
1990 VFR750 - VFR=Very Red "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison. -D. Duck

wan...@exploited.barmy.army

Mar 11, 1998

In article <bc-100398...@17.127.10.22>,
Bill Coderre <b...@wetware.com> wrote:
>Recently in comp.lang.scheme, there has been a thread called "Programming
>is FUN again." Summing it up, it seems to me that people are enjoying
>using Scheme and FORTH, as opposed to slogging through C.

And I can't blame them. I'd happily program in FORTH or Scheme
or Common Lisp or Haskell or a billion other languages (including
Ada) over C or C++ any day.

>
>I've been thinking a lot about what makes programming "FUN." Here's my opinion:
>
>I started programming by writing BASIC programs. I typed them on a
>typewriter, and verified them by hand. Most of these programs were never
>run on a computer.

90% of the satisfaction I get from writing a program is in
seeing it run and using it. Just writing it on paper isn't
enough for me. More power to you.


>
>What did I write? Mostly simulation games -- the user was driving a tank
>across a battlefield being shelled by an unseen army. If you got all the
>way across alive, you won.

I remember those kinds of games :).


>
>To me, the idea of presenting a world made out of ASCII characters was
>pretty interesting. There are still newsgroups full of ASCII artists. This
>is pretty weird considering that a pretty good Pentium or PowerMac will
>get you a zillion polygons a second of 3D world monster-killing action.

Good point. I think the fun is in the challenge. It's almost
like a crossword puzzle. You can't just draw what you want,
you have to pick and assemble the right pieces for the job.
Sorta like using different sized/shaped legos to build
something.


>
>Part of what made these programs FUN was what I *didn't* have to do to
>make them work. No memory management. No operating system handholding. (I
>once told someone that the reason I didn't like programming a Mac was that
>I really didn't want to participate in making the cursor blink in every
>text field -- couldn't I just ask for some input and get it back when the
>user is done?)

Amen. For Lisp, there's more, like the fact that I've got a
very powerful, very flexible, very dynamic language that
will let me do things I could never dream of doing in
other languages. Add to that the incredible power and
simplicity of the list, and it's not hard to see the
tremendous appeal Lisp has.


>
>Definition: A high-level language is one that doesn't require you to get
>intimately involved with the lowest levels of the way your computer works.
>Lisp and BASIC are both high-level because they don't require you to
>manage memory, blink cursors, or dispatch every event to the right part of
>the OS.

Well I'd consider Lisp to be higher level than Basic, because
it shields you from even more of the machine, and gives you more
power. Basic, while higher level than C, is still not very
high level at all, and is very underpowered. It's easy, but
it's weak.


>
>Maybe Lisp and Scheme and BASIC are FUN to program because you get to take
>a holiday from dealing with the "low level" stuff in your computer.

Yeah, nothing beats not having to worry about segmentation
violations, and being able to actually treat all data as
first class members (a simple task that C is too braindead
to let you do).


>
>Maybe they're FUN because you can translate a high-level idea into a
>high-level program without having to do a lot of low-level stuff that just
>gets in the way of your high-level idea.

Right. Don't forget the interactive design. You can design
and test functions on the fly, without the compile/run/link/debug
cycle. I spend almost no time debugging Lisp programs. I
develop them in pieces as I go along, using very high level
abstractions to produce reliable modules.
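
A hypothetical listener session (the prompt and details will vary by
implementation, but this is the shape of it):

    * (defun average (xs) (/ (reduce #'+ xs) (length xs)))
    AVERAGE
    * (average '(1 2 3 4))
    5/2
    * (defun average (xs)          ; want a float? redefine it live
        (/ (reduce #'+ xs) (float (length xs))))
    AVERAGE
    * (average '(1 2 3 4))
    2.5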

In the time it takes me to write a trivial C program, I could
write 10 non-trivial Lisp ones and be well into my 11th. And
they'd work more reliably, be more flexible, and be more
maintainable too.


>
>None of this directly addresses FORTH being FUN, but I am guessing that
>FORTH is sort of the contrapositive (is that the right word?), in that you
>start at a low-level, and stay there. (Of course, my experience with FORTH
>is non-existent, so feel free to call me uninformed.)
>

Well, as low level as Forth is, it still allows you to do some
pretty powerful combinations, and it comes with a nice interactive
system so the interactive development comes into play.

So it sorta caters to both crowds.

I say this with limited experience with Forth as well.


>So FUN might be "getting to translate your ideas into code efficiently,
>without having to spend a lot of time dealing with "bureaucracy" code --
>stuff that isn't part of the problem at hand, but is required before you
>can see the results of your idea."

Right.


>
>I know this is one reason I like Lisp -- I hate memory. I am very happy to
>let Lisp deal with memory. I will GLADLY pay a speed penalty just to avoid
>having to manage memory.

Definitely. For me Lisp always runs fast enough, so the performance
is never an issue.


>
>The fact is, though, that Lisp is usually a lot more fastidious about
>managing memory correctly than I am -- after all, Lisp has many
>person-eons of experience built into it, and that stuff is put into every
>program I write, for free.

Definitely.


>
>Perhaps if C programs did all of the error-checking and memory-management
>they were supposed to, they would also be slower. (I'm sure there's cases
>where Lisp is doing TOO MUCH error checking, and that's resulting in
>unnecessary speed loss, but hey.)

C would still suck however. It still wouldn't allow for higher
order functions (only miserable function pointers which are
no substitute), no dynamism, no first class data types, no
true module system, etc...
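
For instance, here's a little Common Lisp sketch of the kind of thing
a bare function pointer can't express, since a function pointer
carries no environment along with it:

    ;; MAKE-ADDER returns a freshly built closure over N.
    (defun make-adder (n)
      (lambda (x) (+ x n)))

    (funcall (make-adder 3) 10)           ; => 13
    (mapcar (make-adder 100) '(1 2 3))    ; => (101 102 103)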

Memory is one of the issues, but it's not the only issue.


>
>It all boils down to this: I am not very much of a speed demon. If
>something takes 1/10th of a second instead of 2/10ths, I am not going to
>care, onesy-twosy like that (maybe when I have to do a million of them).

I would be very happy to take an even greater speed hit!


>
>I don't know why, but C programmers often seem to be very vocal about
>being interested in optimizing for speed. They also seem to get a kick out
>of writing routines in as few characters as possible, and a frequent bad
>habit of theirs seems to be "pre-optimizing" code, resulting in no speed
>improvement, only readability loss.

That's because C encourages this faulty mind-set. The whole
attitude of C is "obfuscate your code so that your program
will run fast on a PDP-11".

Besides, C doesn't have anything to offer, so the advocates
have to try to convince themselves that the language they are
using is worthwhile by doing such pointlessly stupid tricks.


>
>Yes, playing THAT game (speed and cleverness) can also be FUN. (We've all
>probably had fun like that.) Maybe that game is the C programmer's idea of
>FUN, and if so, Lisp is the worst language they can imagine, because the
>isolation from the "guts" of the computer makes it hard to play that game.

Sure that was fun, back when I was in college and wanted to see
how I could directly access the guts of the system and do all
those things I wasn't supposed to. But when it came time to
write a real program, it all became a liability, and C was
utterly useless.


>
>Goodness knows, memory and speed are easier to measure than "ratio of fun
>code to bureaucracy code" or "elegance" or what have you.
>

Not to those doing the coding :)

>bc

--
Regards,
Ahmed

My email address is punkrock at cs dot uh dot edu.

wan...@exploited.barmy.army

Mar 11, 1998

In article <vfr750Ep...@netcom.com>,
Will Hartung <vfr...@netcom.com> wrote:

[Snip]

>If I was more familiar with Lisp AND its environment, then I'd be
>using it. Lisp itself can have a fairly shallow learning curve. One of
>its strengths is that it can be as easy or as hard as you want it to
>be. It scales well from high school "Intro to Comp. Sci." classes to
>the hairest applications mankind has fed to silicon beasties. I do
>think that the environemnts have a steeper learning curve than the
>language, and that's a lot of what I'm fighting today. "Let's learn
>Common Lisp AND Emacs simultaneously!" Eyah! It has to be natural and
>instinctive to be fun.
>

I haven't found Lisp environments difficult to use at all. Emacs
is quite easy to use (you can do a ton of stuff, but you needn't
know but a little to do what you want), and the various other
Lisp systems out there (ACL, FreeLisp, CMUCL) are very easy.
Their entire integration makes development far easier than
batch interpretation or compilation. If I'm not sure what
a line may do, I can dump it right in the read-eval-print loop
and play around with it. Of course since Lisp is truly high
level, I can write code in a functional style and use formal
methods if I so desire (try doing that in C or any other
imperative language).


>Lisp's biggest weakness, now, for me, is that it's hard to see the tree
>for the forest. Scheme doesn't help because I usually can only find
>seeds and pine cones.

I'm not quite sure what you mean by that. Would you mind
elaborating?


>
>But, overall, for me, PIF when I can paint my vision with thick
>brushes, using wide long strokes and bright colors. This is one reason
>I don't care much for GUI front-end work. Moving boxes around pixel by
>pixel. Checking out hard copy from laser printers to make sure stuff
>lines up properly. My "fun" GUI stuff looks a LOT more unfinished than
>my "command line" stuff. Badly sized windows with boxes and buttons
>placed more by "Seems to be enough room here" policy than any sense of
>order. Sometimes I take the time to align the things up. There's a lot
>of freedom in INPUT "Enter your name", A$.

Yeah, GUI stuff is a dreadful bore. I can't think of anything
more tedious than writing some GUI front end, except maybe
fixing build errors for a Fortran or Cobol program.


>
>"Everyday it's the same thing...Variety. I want something different!"
> - The King via a Bugs Bunny cartoon.
>
>--
>Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr...@netcom.com
>1990 VFR750 - VFR=Very Red "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
>1993 Explorer - Cage? Hell, it's a prison. -D. Duck

Dan Higdon

Mar 11, 1998

wan...@exploited.barmy.army wrote in message
<6e51t4$lh4$1...@Masala.CC.UH.EDU>...

>In article <bc-100398...@17.127.10.22>,
>Bill Coderre <b...@wetware.com> wrote:
>>Recently in comp.lang.scheme, there has been a thread called "Programming
>>is FUN again." Summing it up, it seems to me that people are enjoying
>>using Scheme and FORTH, as opposed to slogging through C.
>
>And I can't blame them. I'd happily program in FORTH or Scheme
>or Common Lisp or Haskell or a billion other languages (including
>Ada) over C or C++ any day.

I'd happily choose SML (or scheme) over C/C++ any day of the week if
I could. However, realtime 3D simulations that have to target common
home computers really can't afford that luxury. C/C++ is your
only real choice. For me, speed is the PRIMARY concern, with
OS integration a close second.

>90% of the satisfaction I get from writing a program is in
>seeing it run and using it.

For me, 75% of my satisfaction comes from knowing that I've
come up with a valid and sensible (in time and space)
solution to the problem. The other 25% is watching it
run. I guess that's why I love scheme so much - it really
lends itself to algorithmic tweaking. Forth (my previous
favorite) is the same way, but I feel that Forth has been
left behind. It was great on 8-bit micros, but is a little
"embedded" feeling for modern systems, IMHO.

>Amen. For Lisp, there's more, like the fact that I've got a
>very powerful, very flexible, very dynamic language that
>will let me do things I could never dream of doing in
>other languages. Add to that the incredible power and
>simplicity of the list, and it's not hard to see the
>tremendous appeal Lisp has.

Yes, I agree (substituting scheme for lisp - I find lisp's
symbol value/function dichotomy annoying). Add to that
SML's rabid typechecking (if it compiles, chances are it
will do what you want first time), and THAT's a fun
programming system. I haven't had the chance to really
hammer DrScheme's "Mr.Spidey" utility, but I suspect I
would really appreciate that sort of analysis as well.

>>Maybe Lisp and Scheme and BASIC are FUN to program because you get to take
>>a holiday from dealing with the "low level" stuff in your computer.
>
>Yeah, nothing beats not having to worry about segmentation
>violations, and being able to actually treat all data as
>first class members (a simple task that C is too braindead
>to let you do).

I really don't like being put in the position of defending C, but this
C bashing is getting a little out of hand.

I take it you've never programmed in any sort of assembly language?
There is a very zen-like enjoyment you can get from writing to the
bare metal. You get exactly what you ask for, no more, no less.
If you've never tried it, you probably won't understand. It's a very
similar rush to what Forth gives you. C is like that as well.
Sure, it's got a list of problems a mile long, and C++ has become
a travesty of languages (especially the new ANSI spec - shudder),
but straight, classic ANSI-C is enjoyable in its own right. But
I can't stand those obfuscation nuts either - bad programming
is bad programming, no matter what you call it.

>Right. Don't forget the interactive design. You can design
>and test functions on the fly, without the compile/run/link/debug
>cycle. I spend almost no time debugging Lisp programs. I
>develop them in pieces as I go along, using very high level
>abstractions to produce reliable modules.

Interactive design is a MAJOR strength of lisp/scheme/sml.
I can't even guess how many hours of my life I could have
saved if C++ had a simple interactive testing environment.

>In the time it takes me to write a trivial C program, I could
>write 10 non-trivial Lisp ones and be well into my 11th. And
>they'd work more reliably, be more flexible, and be more
>maintainable too.

That's because you're probably not very good at C. :-)
But all exaggeration aside, I'm sure that's largely true. Remember,
C is really a platform-independent assembler with limited block
structuring. As such, it's designed to give you direct access to
machine operations. So it's not really fair to compare it with Lisp.
Just as a hunch, I'd guess that a GIF decoder would be easier to
express in C than in Lisp, and would run much faster. Of course,
an A* search algorithm would have the exact opposite property.

>>None of this directly addresses FORTH being FUN, but I am guessing that
>>FORTH is sort of the contrapositive (is that the right word?), in that you
>>start at a low-level, and stay there. (Of course, my experience with FORTH
>>is non-existent, so feel free to call me uninformed.)

That's pretty close. Forth also lets you define new compiling words
(similar to macros in scheme), and the syntax can lend itself to some very
simple and intuitive usages.
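
The Lisp-family analogue, for comparison: a macro teaches the compiler
a new construct much as a compiling word does. A rough Common Lisp
sketch, purely illustrative:

    ;; REPEAT becomes a new "control word", expanded at compile time.
    (defmacro repeat (n &body body)
      (let ((i (gensym)))
        `(dotimes (,i ,n) ,@body)))

    (repeat 3 (format t "ho! "))   ; prints "ho! ho! ho! "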

>>So FUN might be "getting to translate your ideas into code efficiently,
>>without having to spend a lot of time dealing with "bureaucracy" code --
>>stuff that isn't part of the problem at hand, but is required before you
>>can see the results of your idea."

I'll buy that.

>Definitely. For me Lisp always runs fast enough, so the performance
>is never an issue.

Sadly, that's not the case for me. Also, I don't know of any Lisp systems
that would let me interface with Microsloth's DirectX interfaces - also
a requirement of my apps.

>C would still suck however. It still wouldn't allow for higher
>order functions (only miserable function pointers which are
>no substitute), no dynamism, no first class data types, no
>true module system, etc...

Yep - don't try to make C something it isn't. C++ tried to
extend C into a language suitable for application development,
but IMHO has crumbled under its own creeping featuritis.

>Memory is one of the issues, but it's no tthe only issue.

I could wax boorish in a big way about the flaws of C/C++,
but that's quite off topic.

>>I don't know why, but C programmers often seem to be very vocal about
>>being interested in optimizing for speed. They also seem to get a kick out
>>of writing routines in as few characters as possible, and a frequent bad
>>habit of theirs seems to be "pre-optimizing" code, resulting in no speed
>>improvement, only readability loss.

Maybe that's because we've chosen C due to the fact that we need
to fly through an amazing number of computations a second more than
we need memory safety.

I suspect some lisp programmers get off on using the most obscure corners
of the Common Lisp spec to get something done. The hacker mentality
comes in all flavors - you hack whatever system you're using.

>That's because C encourages this faulty mind-set. The whole
>attitude of C is "obfuscate your code so that your program
>will run fast on a PDP-11".

Amusing, but untrue.

>Besides, C doesn't have anything to offer, so the advocates
>have to try to convince themselves that the language they are
>using is worthwhile by doing such pointlessly stupid tricks.

C has exactly three things to offer - easy availability of good quality
compilers, the ability to generate fast, small footprint executables,
and the ability to replace assembly language as an implementation
language for a large class of problems.

If you need those things, you don't really have a credible alternative
for professional ISV development. Oh yeah - a fourth thing C has
is easy access to pretty much any current operating system's system
calls, which is also invaluable for commercial software.

>>Goodness knows, memory and speed are easier to measure than "ratio of fun
>>code to bureaucracy code" or "elegance" or what have you.
>
>Not to those doing the coding :)

Amen. I eagerly await the day that C/C++ can be dethroned. Already,
people in the industry are starting to come around to realizing how
crappy the C/C++ family of languages is for large-scale development
of sophisticated apps. Now, if our "fun" languages could only replace
C in the real world, programming for a living could become fun again.

(postscript - I just came off a 5 hour debugging session where I ultimately
discovered that some RAM had been decommitted a few instructions too
early. How much Lisp could you sling in 5 hours? <Sigh>)

----------------------------------------
hd...@charybdis.com
There's no one left to finger
No one here to blame


wan...@exploited.barmy.army

Mar 11, 1998

In article <HrpN.415$SA3.3...@typhoon.texas.net>,
Dan Higdon <hd...@charybdis.com> wrote:
>wan...@exploited.barmy.army wrote in message

[Snip]

>>And I can't blame them. I'd happily program in FORTH or Scheme
>>or Common Lisp or Haskell or a billion other languages (including
>>Ada) over C or C++ any day.
>
>I'd happily chose SML (or scheme) over C/C++ any day of the week if
>I could. However, realtime 3D simulations that have to target common
>home computers really can't afford that luxury. C/C++ is your
>only real choice. For me, speed is the PRIMARY concern, with
>OS integration a close second.
>

Well at work I'm restricted by something a little more down to
earth. They use C/C++/Fart-ran (misspelling intentional),
so I'm stuck using them. At home, Lisp does everything
I want it to do with perfectly acceptable performance so
I don't have to stain my hands over there.


>>90% of the satisfaction I get from writing a program is in
>>seeing it run and using it.
>
>For me, 75% of my satisfaction comes from knowing that I've
>come up with a valid and sensible (in time and space)
>solution to the problem. The other 25% is watching it
>run. I guess that's why I love scheme so much - it really
>lends itself to algorithmic tweaking. Forth (my previous
>favorite) is the same way, but I feel that Forth has been
>left behind. It was great on 8-bit micros, but is a little
>"embedded" feeling for modern systems, IMHO.

Hmmmm good point, maybe I should revise my "90%" figure :)


>
>>Amen. For Lisp, there's more, like the fact that I've got a
>>very powerful, very flexible, very dynamic language that
>>will let me do things I could never dream of doing in
>>other languages. Add to that the incredible power and
>>simplicity of the list, and it's not hard to see the
>>tremendous appeal Lisp has.
>
>Yes, I agree (substituting scheme for lisp - I find lisp's
>symbol value/function dichotomy annoying).

Scheme IS Lisp. It's a dialect of Lisp. You are confusing
a family of languages (Lisp) with specific dialects
(like Common Lisp). I understand what you are trying to say,
but you should be more specific to avoid confusion.


> Add to that
>SML's rabid typechecking (if it compiles, chances are it
>will do what you want first time), and THAT's a fun
>programming system. I haven't had the chance to really
>hammer DrScheme's "Mr.Spidey" utility, but I suspect I
>would really appreciate that sort of analysis as well.
>

Have you by any chance tried Haskell? That has some serious
type checking and type inferencing, and IMO a nicer syntax
than the ML family of languages. It is purely functional
so when you get to IO you'll have to deal with monads, but
it's a great language and also tons of fun.


[Snip]

>>Yeah, nothing beats not having to worry about segmentation
>>violations, and being able to actually treat all data as
>>first class members (a simple task that C is too braindead
>>to let you do).
>
>I really don't like being put in the position of defending C, but this
>C bashing is getting a little out of hand.

It isn't bashing, it's a statement of fact.


>
>I take it you've never programmed in any sort of assembly language?

Yes I have. I've programmed in VAX, PDP-11, x86 and 65xx
(C64 in particular -- 6510) assembly languages.


>There is a very zen-like enjoyment you can get from writing to the
>bare metal. You get exactly what you ask for, no more, no less.
>If you've never tried it, you probably won't understand. It's a very
>similar rush to what Forth gives you. C is like that as well.

I can understand the thrill of programming to the bare metal,
but C does NOT give me that thrill. C is low level enough
to make even the most trivial problems a pain in the neck,
but high level enough that you don't get the thrill of
programming to the bare metal.


>Sure, it's got a list of problems a mile long, and C++ has become
>a travesty of languages (especially the new ANSI spec - shudder),
>but straight, classic ANSI-C is enjoyable in its own right. But
>I can't stand those obfuscation nuts either - bad programming
>is bad programming, no matter what you call it.

Well I see nothing enjoyable or even worthwhile in C, so I guess
we'll just have to agree to disagree.


>
>>Right. Don't forget the interactive design. You can design
>>and test functions on the fly, without the compile/run/link/debug
>>cycle. I spend almost no time debugging Lisp programs. I
>>develop them in pieces as I go along, using very high level
>>abstractions to produce reliable modules.
>
>Interactive design is a MAJOR strength of lisp/scheme/sml.
>I can't even guess how many hours of my life I could have
>saved if C++ had a simple interactive testing environment.
>

Yes, that feature alone would seriously make life easier,
even in C/C++. It's especially agonizing when you take
a LONG time to compile (where I work, compilation times of
15 - 30 minutes are not unusual).

I remember reading about a C interpreter a LONG time ago.
It may have been a sort of garage project in somebody's
spare time, but I thought you'd be interested in knowing
that there may have been such a beast somewhere :).


>>In the time it takes me to write a trivial C program, I could
>>write 10 non-trivial Lisp ones and be well into my 11th. And
>>they'd work more reliably, be more flexible, and be more
>>maintainable too.
>
>That's because you're probably not very good at C. :-)

Actually, I know C better than I know Scheme, Lisp, or Haskell,
yet I can do better using the latter 3 languages.


>But all exaggeration aside, I'm sure that's largely true. Remember,
>C is really a platform-independent assembler with limited block
>structuring. As such, it's designed to give you direct access to
>machine operations. So it's not really fair to compare it with Lisp.
>Just as a hunch, I'd guess that a GIF decoder would be easier to
>express in C than in Lisp, and would run much faster. Of course,
>an A* search algorithm would have the exact opposite property.

Comparing C and Lisp is fair, since both are general purpose
languages. Just because C is a portable pseudo-assembler
does not make it immune from comparison or criticism. We're
talking about getting the job done, and intentions mean nothing now,
only results.

As for a GIF decoder, since I'm not sure what the algorithm
is (I suppose I could find out), I can't say whether or not
C would be better at it than Lisp. A* search I do know and
agree with however.


[Snip]

>That's pretty close. Forth also lets you define new compiling words
>(similar to macros in scheme), and the syntax can lend itself to some very
>simple and intuitive usages.

The one thing that got to me about Forth (and this may be due to my
own ignorance of the language at the time) was the inability to
get at any data lower than 3 levels down on the stack without
popping everything before it off. Things like rotate and swap
let me re-arrange stuff 3 levels deep, but after that I found
myself scratching my head. It's very possible that I wasn't
writing code in the spirit of Forth (ie: trying to write C in
Forth).

One of the things I liked was the way many definitions were
mostly tacit. That was nice (it's also a nice feature of
Haskell and even J). Whether tacit definitions give you any
kind of gain apart from fun and joy is another matter (they
could be considered higher level and closer to the way
we think about a problem and save some typing).
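
You can fake a little of the tacit style in Lisp with a composition
helper -- a two-liner of my own, nothing standard:

    ;; COMPOSE builds f-after-g; the resulting function never
    ;; names its argument anywhere in the "definition" you write.
    (defun compose (f g)
      (lambda (x) (funcall f (funcall g x))))

    (funcall (compose #'sqrt #'abs) -16)   ; => 4.0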


[Snip]

>>Definitely. For me Lisp always runs fast enough, so the performance
>>is never an issue.
>
>Sadly, that's not the case for me. Also, I don't know of any Lisp systems
>that would let me interface with Microsloth's DirectX interfaces - also
>a requirement of my apps.
>

Hmmm, not even Allegro Common Lisp?


>>C would still suck however. It still wouldn't allow for higher
>>order functions (only miserable function pointers which are
>>no substitute), no dynamism, no first class data types, no
>>true module system, etc...
>
>Yep - don't try to make C something it isn't. C++ tried to
>extend C into a language suitable for application development,
>but IMHO has crumbled under its own creeping featuritis.

Hmmmm, I'd rather have too much than too little, so I would
consider C++ to be better than C (at least it did go on to
correct some of C's flaws like having to pass pointers to
function parameters just to modify them). I am not saying
that C++ is good mind you, but I do consider it to be better
than C.

I would say that C++'s problem is the fact that it's trying
to graft a high level concept onto what is essentially a
low level language.


>Maybe that's because we've chosen C due to the fact that we need
>to fly through an amazing number of computations a second more than
>we need memory safety.
>
>I suspect some lisp programmers get off on using the most obscure corners
>of the Common Lisp spec to get something done. The hacker mentality
>comes in all flavors - you hack whatever system you're using.

You are right, however the attitude of using a dozen side effects
in any statement seems to be widely entrenched in C attitudes.
Many of the books I read recommended using such tricks.


>
>>That's because C encourages this faulty mind-set. The whole
>>attitude of C is "obfuscate your code so that your program
>>will run fast on a PDP-11".
>
>Amusing, but untrue.

Well if you read the books and look at code, they're all
encouraging obfuscation for the purposes of "efficiency".
But these tend to be low level considerations that seem
to rely on architecture-dependent issues. Can you say
that these tricks will guarantee faster code on every
platform?


>
>>Besides, C doesn't have anything to offer, so the advocates
>>have to try to convince themselves that the language they are
>>using is worthwhile by doing such pointlessly stupid tricks.
>
>C has exactly three things to offer - easy availability of good quality
>compilers, the ability to generate fast, small footprint executables,
>and the ability to replace assembly language as an implementation
>language for a large class of problems.
>

Wouldn't Forth have the same three things to offer? I don't
recall Forth being compiled however.

>If you need those things, you don't really have a credible alternative
>for professional ISV development. Oh yeah - a fourth thing C has
>is easy access to pretty much any current operating system's system
>calls, which is also invaluable for commercial software.

Since whether or not you have O/S system calls is an
implementation dependent issue, I'm assuming you are speaking
with regards to calling conventions and other low-level
isms which makes the addition of such calls easier? Correct
me if I'm wrong.


[Snip]

>>Not to those doing the coding :)
>
>Amen. I eagerly await the day that C/C++ can be dethroned. Already,
>people in the industry are starting to come around to realizing how
>crappy the C/C++ family of languages is for large-scale development
>of sophisticated apps. Now, if our "fun" languages could only replace
>C in the real world, programming for a living could become fun again.
>

Amen. I truly hope that day will come, but to be honest I have
very little faith in the computing industry. Yes, researchers
are doing some very exciting things, but the industry at large
is disappointing to say the least.

I fear that if C/C++ die, then something even worse will replace
them.


>(postscript - I just came off a 5 hour debugging session where I ultimately
>discovered that some RAM had been decommitted a few instructions too
>early. How much Lisp could you sling in 5 hours? <Sigh>)

Exactly.


>
>----------------------------------------
>hd...@charybdis.com
>There's no one left to finger
>No one here to blame


--
Regards,
Ahmed

My email address has been altered to avoid spam. Please send
email to punkrock at cs dot uh dot edu.


Martin Rodgers

Mar 11, 1998

Bill Coderre wheezed these wise words:

> None of this directly addresses FORTH being FUN, but I am guessing that
> FORTH is sort of the contrapositive (is that the right word?), in that you
> start at a low-level, and stay there. (Of course, my experience with FORTH
> is non-existent, so feel free to call me uninformed.)

What makes Forth fun, for me and most of the programmers I know who
use (or have used) Forth, is the ease with which you can build new
tools. Perhaps Leo Brodie's "Thinking Forth" is the Forth answer to
Paul Graham's "On Lisp"?

There are many characteristics that we can use to distinguish families
of languages, or entire classes. My favourite characteristic is the
"one world" approach used by Lisp, Forth, Smalltalk, and no doubt many
others (APL?). All interactive "one world" languages with incremental
compilers appeal to me.

Curiously, I've never liked batch oriented Forth compilers, except for
the meta compilers used to build Forth systems. Yep, another feature
of these systems is that they can build themselves, just like in the
last chapter of SICP. Some Forth systems even have fast load modules!

Now, meta compilers are serious fun. ;) Check out the Cassidy Forth
meta compiler...
--
Please note: my email address is munged; You can never browse enough
"There are no limits." -- ad copy for Hellraiser

Martin Rodgers

Mar 11, 1998

wan...@exploited.barmy.army wheezed these wise words:

> And I can't blame them. I'd happily program in FORTH or Scheme
> or Common Lisp or Haskell or a billion other languages (including
> Ada) over C or C++ any day.

Snap! C/C++ is pure batch oriented hell. If you like these languages,
and the tools for them, then you're not far from using punched cards.



> >What did I write? Mostly simulation games -- the user was driving a tank
> >across a battlefield being shelled by an unseen army. If you got all the
> >way across alive, you won.
>
> I remember those kinds of games :).

So do I! I still have a wonderful book called Stimulating Simulations.
I used to use the Monster Chase program as a way of exploring any
language that I was learning. It was perfect in Forth. When I started
using Lisp, this changed. I moved from recursive descent parsers, in C,
to code that _wrote_ parsers, in Lisp. NFA and DFA crunching.

After that, I mainly wrote code to crunch Lisp expressions. ;) No more
interactive ASCII graphical games. The games were more abstract. The
hex dumps were replaced by s-exprs, DFA graphs, parse trees, etc.

Loads of fun.

Martin Rodgers

Mar 11, 1998

wan...@exploited.barmy.army wheezed these wise words:

> Actually, I know C better than I know Scheme, Lisp, or Haskell,
> yet I can do better using the latter 3 languages.

Same here. At one time, the only language I knew better than C was
Forth. Now I know C well enough to appreciate not using it.

The pro C argument I see most often is based on how well C is suited
to device drivers, which amuses me. I used to write device drivers in
Forth with no sweat, but I wouldn't touch a driver that needs to run
at the kernel level. Is this the best domain for C? Something that is
inherently painful? How many apps run inside the kernel?

And then we have the tools to make C "easier", like Bounds Checker.
Why are tools like this not needed in Lisp? The security they offer is
already provided by Lisp.

We could argue that pure FP languages like Haskell offer even more,
because they make state dependencies explicit. However, this state
independence could also be achieved in Lisp, by using a pure FP style
(easy enough in Scheme). If you want something like Monad IO, this too
could be done. Perhaps it's just a question of discipline?

How easy is it to do this in C/C++? I know there's a book about
functional programming in C, but I've not seen it yet. Unfortunately,
I don't know of any tools for C/C++ that would work in an FP style.
Imagine the difficulty of interfacing a typical GUI framework in C++
with some C code that uses continuations. While in theory it could be
done, it would require considerably more discipline than most C/C++
programmers are used to, and I keep reading about problems in C++
frameworks (written by pro C++ folk). If you have to work with non-FP
folk, this will be a disaster. Their code has to work with yours, and
yours must work with theirs. Any problems will be blamed on the least
"trusted" code, and that'll be the FP code. "Trusted", in this
context, is a political attribute, and will therefore vary depending
on who you ask. I don't trust C++.

Like C++ tools, it may be best to avoid C++ folk. ;) (If you can.)
This isn't bashing, it's just advice for like minded individuals. Nor
is it whinging to point out that C++ folk have a high profile. I often
get blank looks when I mention Lisp to people. I sometimes think I
should get a little card printed with the URL for "Lisp: Good News Bad
News How to Win Big".

I could hand them out faster than my business cards...


--
Please note: my email address is munged; You can never browse enough

"Oh knackers!" - Mark Radcliffe

Christopher Lee

Mar 11, 1998

>>>>> "W" == wan...@exploited.barmy.army wheezed these wise words:
W> Actually, I know C better than I know Scheme, Lisp, or Haskell,
W> yet I can do better using the latter 3 languages.

>>>>> "M" == Martin Rodgers <m...@this.email.address.intentionally.left.crap.wildcard.demon.co.uk> writes:

M> Same here. At one time, the only language I knew better than C
M> was Forth. Now I know C well enough to appreciate not using it.

M> The pro C argument I see most often is based on how well C is
M> suited to device drivers, which amuses me. I used to write
M> device drivers in Forth with no sweat, but I wouldn't touch a
M> driver that needs to run at the kernel level. Is this the best
M> domain for C? Something that is inherently painful? How many
M> apps run inside the kernel?

I use both Scheme and C for my programming. As much as I like Scheme
and use it as much as I can, it doesn't seem to make sense for many
different kinds of applications. Here are a few things I have done
recently for which C has seemed the best solution:

- Real-time robot control code (I actually wrote a simple embedded
real-time Scheme interpreter for _high-level_ control of the
robot, but I would never use it for computing real-time robot
dynamics and control equations at a lower-level).
Granted, I could use something like Ada and possibly Forth for
this, but I wouldn't use Lisp.
- Writing a real-time garbage collector for a Scheme interpreter
(this requires interfacing with the OS for memory allocation,
laying out heaps of cons cells in the allocated memory, and all
sorts of nasty bit-twiddling in unused least-significant-bits of
memory pointer representations for space efficiency).
Writing this in Lisp makes no sense to me. I'm not even sure
this would be too easy to do in any other language besides C
(I want portability, so assembly language isn't an option).
- Animating an OpenGL rendering of a hand from a stream of
finger-joint data from a Cyberglove for a virtual reality
environment.

Knowing that I generally have a working C compiler available, no matter
what system I use or what else is going wrong with the system, is also
extremely important.

-Chris

ps. I can't stand C++, but I'll admit I once thought it was cool.

Dan Higdon

Mar 11, 1998

wan...@exploited.barmy.army wrote in message
<6e65bn$ohb$1...@Masala.CC.UH.EDU>...

>In article <HrpN.415$SA3.3...@typhoon.texas.net>,
>Dan Higdon <hd...@charybdis.com> wrote:
>>wan...@exploited.barmy.army wrote in message
>
>[Snip]
>Well at work I'm restricted by something a little more down to
>earth. They use C/C++/Fart-ran (misspelling intentional),
>so I'm stuck using them. At home, Lisp does everything
>I want it to do with perfectly acceptable performance so
>I don't have to stain my hands over there.

Looks like we're kindred spirits of a sort, substituting
SML for Lisp. :-) I still can't believe I'm defending C/C++,
since I'm their biggest detractor here at work. Oh well,
back to it....

>Scheme IS Lisp. It's a dialect of Lisp. You are confusing
>a family of languages (Lisp) with specific dialects
>(like Common Lisp). I understand what you are trying to say,
>but you should be more specific to avoid confusion.

Absolutely - a subtle but important difference. (Chalk it up to
a very long day.) Common Lisp was the dialect I specifically
don't care for, although it does seem to be the most capable
of "real world" development, due to its extensive tools.

>Have you by any chance tried Haskell? That has some serious
>type checking and type inferencing, and IMO a nicer syntax
>than the ML family of languages. It is purely functional
>so when you get to IO you'll have to deal with monads, but
>it's a great language and also tons of fun.

Yes - actually I got my start in non-lisp functional languages with
Gofer, a dialect of Haskell. I agree - I really like the syntax,
especially the "literate" versions, where the "normal" text was
comments, and you had to mark the source lines!
In the end, I'm using SML because I like the fact that it's not
quite as pure functional, and it's not lazy. (Although you can
simulate lazy evaluation like you do in Scheme - with promises.)
Lazy languages seem a little tricky to reason about mathematically.
Perhaps that's just because I'm a little out of touch with modern
CS practices.
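
The promise trick is small enough to sketch. Common Lisp doesn't build
in DELAY and FORCE, but as a rough illustration they're just a memoized
thunk:

    ;; DELAY wraps an expression in a closure; FORCE runs it once
    ;; and caches the result. (Gensyms keep the expansion hygienic.)
    (defmacro delay (expr)
      (let ((done (gensym)) (value (gensym)))
        `(let ((,done nil) (,value nil))
           (lambda ()
             (unless ,done
               (setf ,value ,expr ,done t))
             ,value))))

    (defun force (promise) (funcall promise))

    ;; EXPENSIVE-COMPUTATION is a stand-in, not a real function.
    (defparameter *p* (delay (expensive-computation)))
    (force *p*)   ; computes; a second FORCE returns the cached value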

>Yes I have. I've programmed in VAX, PDP-11, x86 and 65xx
>(C64 in particular -- 6510) assembly languages.

I stand corrected. (Gotta love the 6502 stuff - first RISC chip, IMHO)

>Well I see nothing enjoyable or even worthwhile in C, so I guess
>we'll just have to agree to disagree.

Agreed. That's why there are so many computer languages. :-)

>I remember reading about a C interpreter a LONG time ago.
>It may have been a sort of garage project in somebody's
>spare time, but I thought you'd be interested in knowing
>that there may have been such a beast somewhere :).

Yep, I remember that one too. Strange but interesting idea.
I don't remember how well it worked.

>>That's because you're probably not very good at C. :-)
>Actually, I know C better than I know Scheme, Lisp, or Haskell,
>yet I can do better using the latter 3 languages.

(The smiley was for sarcasm - I'm glad you didn't take offense at my
statement, I *was* just joking.) I've found that to be true as well.
I actually freaked out a coworker by prototyping a particularly
tricky algorithm in SML first. The "fun language" world has another
convert now! :-)

>Comparing C and Lisp is fair, since both are general purpose
>languages. Just because C is a portable pseudo-assembler
>does not make it immune from comparison or criticism. We're
>talking about getting the job done, and intentions mean nothing now,
>only results.

Fair enough. That would be like holding Lisp's symbolic processing
nature against it for numeric or data processing applications.

>As for a GIF decoder, since I'm not sure what the algorithm
>is (I suppose I could find out), I can't say whether or not

It involves a lot of pointer arithmetic and such. You could certainly
do it in Lisp, but my argument is that expressing memory accesses
and pointer arithemetic is cleaner in C than Lisp, since C has
notations to directly support these activities. Much like lisp has
notation for list handling, which is mildly painful in C.

>C would be better at it than Lisp. A* search I do know and
>agree with however.

>The one thing that got to me about Forth (and this may be due to my
>own ignorance of the language at the time) was the inability to
>get at any data lower than 3 levels down on the stack without
>popping everything before it off. Things like rotate and swap

[snip]

Most Forths have an 'index' word that you can use to go arbitrarily
deep into the stack. Modern Forth dialects even let you declare
"parameters", which become local words that produce the value
passed in, although that's kind of "cheating" IMHO. :-)

>>Sadly, that's not the case for me. Also, I don't know of any Lisp systems
>>that would let me interface with Microsloth's DirectX interfaces - also
>>a requirement of my apps.
>>
>Hmmm, not even Allegro Common Lisp?

Don't know about that one, so I may be wrong about that. It's likely that
Allegro Common Lisp wouldn't fit my memory footprint though.
(Still doesn't validate my argument here - it would be interesting to
know if Allegro can call DirectX. I'd better do some research....)

>Hmmmm, I'd rather have too much than too little, so I would
>consider C++ to be better than C (at least it did go on to
>correct some of C's flaws like having to pass pointers to
>function parameters just to modify them). I am not saying
>that C++ is good mind you, but I do consider it to be better
>than C.

I program almost exclusively in C++, so I'd tend to agree with
you. The problem is, unlike Lisp, grafting extra features onto
the language just makes it more cumbersome and complex.

I think CLOS is a good example of how Lisp can be cleanly
expanded to new programming models without kludging up
the original language. One of the advantages of not really
having a syntax, I suppose. :-)

(I've only ever looked at CLOS, so I have no real experience
with it, and may be completely off-base)
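
From the little I've seen, it's all ordinary-looking definitions.
Something like this fragment -- written from memory, so treat it as a
sketch rather than working code:

    ;; CLOS arrives as ordinary definitions -- no new core syntax.
    (defclass circle ()
      ((radius :initarg :radius :reader radius)))

    (defgeneric area (shape))
    (defmethod area ((c circle))
      (* pi (radius c) (radius c)))

    (area (make-instance 'circle :radius 2.0))   ; => ~12.566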

>I would say that C++'s problem is the fact that it's trying
>to graft a high level concept onto what is essentially a
>low level language.

Amen, brother. Leo Brodie postulated that Forth is neither a
high-level nor low-level language, but an "all level" language,
because you could seamlessly go from assembly programming
to arbitrarily abstract interfaces. Lisp is the same way, shifted
up the "level" a little. You can happily write everything from
compilers to sophisticated AI systems in it, all without doing
anything weird to the language.

That's why Lisp is fun to me.

>You are right, however the attitude of using a dozen side effects
>in any statement seems to be widely entrenched in C attitudes.
>Many of the books I read recommended using such tricks.

Yes, it does seem to be a C-cultural thing. Some of those tricks
are accepted idioms of the language however, so I don't object
to their use.

>Well if you read the books and look at code, they're all
>encouraging obfuscation for the purposes of "efficiency".

"Obfuscation is in the eye of the beholder". I think you're
referring to rampant abuses of the pre- and post-increment
operators, such as "while ((*a++ = *b++) != '\0');" for
a string copy. Yeah, it's ugly, but most C programmers
understand it as an idiom. I personally would use strcpy(),
but I try to focus on readability. Probably because I'm a
freak who likes to program in functional languages on his
off time. :-)

>But these tend to be low level considerations that seem
>to rely on architecture-dependent issues. Can you say
>that these tricks will guarantee faster code on every
>platform?

No, indeed you can't. That's why I shy away from them for
the most part. But, I sometimes find myself disassembling code
to see if I can trick the compiler into producing better code for
a given expression. After a while, you find out what works and
what doesn't for your compiler.

Actually, that last paragraph may be the most damning criticism
of C/C++ I can think of! Imagine having to program in a language
where second-guessing the code generator was even something
you were TEMPTED to do! Scary. Scarier still is that I've often
wondered how to disassemble SML functions. It's a sickness -
I think I might need professional help. :-)

>Wouldn't Forth have the same three things to offer? I don't
>recall Forth being compiled however.

Yes, it does. All Forths are compiled in some way. Traditional
Forths compile to a list of word pointers and constants. Some
newer Forths compile to "direct threaded" code, where the target
of every call is real machine code, and the interpreter is invoked
as needed.

>Since whether or not you have O/S system calls is an
>implementation dependent issue, I'm assuming you are speaking
>with regards to calling conventions and other low-level
>isms which makes the addition of such calls easier? Correct
>me if I'm wrong.

Yes, it is an implementation issue for the most part. The C calling
conventions mesh very well with OS calling conventions. Probably
because many of those are implemented in C. It's true that a
sufficiently well-integrated Lisp system would be able to do all
these things. I may be falling prey to the fallacy that most Lisp
systems want mammoth memory footprints, and don't really
let you get "down and dirty" with the OS. I would still question
the validity of trying to write "Quake" in Lisp, however.

>Amen. I truly hope that day will come, but to be honest I have
>very little faith in the computing industry. Yes, researchers
>are doing some very exciting things, but the industry at large
>is disappointing to say the least.
>
>I fear that if C/C++ die, then something even worse will replace
>them.

Like Java! :-) I've followed a lot of the so-called practical languages,
like Eiffel, Dylan and Oberon in hopes that a better GP languages
might catch on, but so far, it's been all for naught. Lisp languages
have the disadvantage that many find their syntax (or lack thereof)
difficult to read, which is probably the largest obstacle to Lisp world
dominance. Look at Java - I suspect the reason people even consider
looking at it is that it looks just like a light version of C++.

Sigh. I suppose most of us will have to continue writing fun programs
in fun languages in our off-time, and slog through cruddy, cryptic
languages at work.

Will Hartung

Mar 11, 1998

wan...@exploited.barmy.army writes:

>In article <vfr750Ep...@netcom.com>,
>Will Hartung <vfr...@netcom.com> wrote:

>[Snip]

>>>"Let's learn Common Lisp AND Emacs simultaneously!" Eyah! It has to be


>>>natural and instinctive to be fun.


>I haven't found Lisp environments difficult to use at all.

A lot of this is Baby Duck syndrome. I'm a long time ancient vi(1)
hacker, and Emacs is NOT vi, so there is a bit of a learning curve
there beyond moving the point around.

But, more importantly, beyond simply saving the buffer and
(LOAD ...)ing the file, there is the plethora of SHIFT-ALT-CTRL yada
yada yada commands that interface Emacs to the Lisp environment. A bit
more learning curve.

>>Lisp's biggest weakness, now, for me, is it's hard to see the tree for
>>the forest. Scheme doesn't help because I usually can only find seeds
>>and pine cones.

>I'm not quite sure what you mean by that. Would you mind
>elaborating?


Simple analogy.

I like taking new folks out to lunch to a local burger stand. Why? The
food is good for one, but importantly, they have, effectively ZERO
selection. Basically an orthogonal mix of fried burger patties,
cheese, lettuce, tomatoes, onion.

No chicken, no fish, no Gyros, no "Smoked Salmon Salads". When you
have very few choices, decisions are REAL simple. With new people, I
don't have to explain the entire menu to them, so it's a quick
process.

Now, Lisp, as we are all very aware, is a large language with zillions
of options and utilities (so's Emacs for that matter).

Scheme is MUCH smaller. Scheme is so small, that a lot of functionality
needs to be provided by the coder or through a library. That's an
observation.

For the environments *I* work in, and the applications *I* tend to
work on, Scheme is too limited, and I'm lazy enough to not be
motivated to write all of my own stuff. Seeds and pine cones.

Lisp is big big big. There are, what, a half dozen (at least)
DIFFERENT ways to iterate in the language (do, loop, recursion,
dotimes, dolist, map...). I'm starting my Lisp training with a 20
drawer roller cabinet FILLED with tools to use, instead of just a
hammer and a screwdriver. *I* find all of these options confusing as
I'm indecisive as to which to use when.

With more experience, it'll be natural, but right now Lisp is
unintuitive to me because I'm always second guessing half the things I
do. DEFSTRUCT, or DEFCLASS? A-List or Hash table? List or vector?
Everything works with everything. Sure there are square pegs and round
holes, but they're all made of foam, so...press to fit.

When you're used to other languages, when you've focused your problem
solving style on using a hammer and screwdriver, and some guy gives
you a roller cabinet AND a lathe and raw stock to make more, you can
get a bit overwhelmed.

>Yeah, GUI stuff is a dreadful bore. I can't think of anything
>more tedious than writing some GUI front end, except maybe
>fixing build errors for a Fortran or Cobol program.

Yeah, but at least the errors you're fixing don't go away when the
user changes his font size :-).

Mike Mcdonald

unread,
Mar 11, 1998, 3:00:00 AM3/11/98
to

In article <6e528d$lbj$1...@masala.cc.uh.edu>,

wan...@exploited.barmy.army writes:
> In article <vfr750Ep...@netcom.com>,
> Will Hartung <vfr...@netcom.com> wrote:
>
> [Snip]
>

>

>>Lisp's biggest weakness, now, for me, is it's hard to see the tree for
>>the forest. Scheme doesn't help because I usually can only find seeds
>>and pine cones.
>
> I'm not quite sure what you mean by that. Would you mind
> elaborating?

My interpretation of his remark is that, by design,
scheme gives you just enough so you can build
anything you want. But it's your responsibility to
build it. (Or figure out why the various pieces of
slib don't work together. (Old experiences.)) Common
Lisp, on the other hand, gives you everything you
could ever possibly want. But then you have to figure
out what subset is the right portion for the task at
hand. Sometimes, one has the desire for something in
between the two. (I'm more apt to use CL than scheme.
I'd rather prune than gather. But that's just me.)

Mike McDonald
mik...@mikemac.com


Martin Rodgers

unread,
Mar 11, 1998, 3:00:00 AM3/11/98
to

Christopher Lee wheezed these wise words:

> I use both Scheme and C for my programming. As much as I like Scheme
> and use it as much as I can, it doesn't seem to make sense for many
> different kinds of applications. Here are a few things I have done
> recently for which C has seemed the best solution:

I can certainly say there are times when it's easier to use a tool
that already exists, rather than develop something new in Lisp. Lisp
_can_ win big, but some jobs are just too small for Lisp to make a
difference. And then there's:



> - Real-time robot control code (I actually wrote a simple embedded
> real-time Scheme interpreter for _high-level_ control of the
> robot, but I would never use it for computing real-time robot
> dynamics and control equations at a lower-level).
> Granted, I could use something like Ada and possibly Forth for
> this, but I wouldn't use Lisp.

Interpreted? Hmm. I realised very quickly (even before I began coding
in Lisp) that an interpreter can be used for development, but a
compiler may be better for delivery. In some cases, it can be worth
writing a specialised interpreter and compiler, to support whatever
domain you're working with. Robot control sounds like an excellent
example. For me, it used to be parsers. The code can be tested with an
interpreter (initially, this was all I had), and later compiled to C
or assembly language.

I also like the idea of compiling Lisp to Forth. Build a vocabulary
for the domain you're using, in Forth. Build an interpreter for the
same domain, in Lisp. Then write a compiler for the same domain
specific language, and use your Forth vocabulary as the target.
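
(A minimal sketch of the compiler half of that idea, in Common Lisp -
the word names and two-operand arithmetic are my own toy assumptions:)

  ;; Compile prefix arithmetic into a postfix word list that a
  ;; Forth vocabulary could execute directly.
  (defun compile-to-forth (expr)
    (if (atom expr)
        (list expr)
        (destructuring-bind (op a b) expr
          (append (compile-to-forth a)
                  (compile-to-forth b)
                  (list op)))))

  (compile-to-forth '(* (+ 1 2) 3))   ; => (1 2 + 3 *)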

When Forth programmers do this, they don't necessarily need to make
the distinction between development and delivery in this way. It may
not even be necessary in Lisp, if you use a native code compiler.
However, for real time control you may certainly wish to make such a
distinction, and this is easy enough.

See the Persona project for an example of a declarative definition of
a problem that compiles into C++ animation code:
http://www.research.microsoft.com/

> - Writing a real-time garbage collector for a Scheme interpreter
> (this requires interfacing with the OS for memory allocation,
> laying out heaps of conscells in the allocated memory, and all
> sorts of nasty bit-twiddling in unused least-significant-bits of
> memory pointer representations for space efficiency).

Do you really need to write your app using a Scheme interpreter? Why
not write it in a language implemented in Scheme? Perhaps this is a
cop-out, but I'd want to use a domain specific language anyway, so
this seems like not such a strange thing to do. I've seen it done by
C++ programmers - even MS use domain specific languages! The only
problem is that they tend not to compile to C, C++, or even bytecodes.
Still, they serve a valuable purpose by being highly domain oriented.
They save the programmer a lot of time writing (and later reading)
code to, say, create a user interface by building lots of window
objects.

> Writing this in Lisp makes no sense to me. I'm not even sure
> this would be too easy to do in any other language besides C
> (I want portability, so assembly language isn't an option).

You can very simply write a compiler for a specialised language that
uses C as the target language. Forth programmers do this with threaded
code, and sometimes even machine code. You could also write Lisp code
to compile a domain specific language to native code.

If I ever find the time, I may get around to working on an audio synth
definition language, for describing the "modules", their settings, and
connections. Effectively, a modular synth in software. The idea is for
it to compile to either native code or assembly language. I'd like to
optimize the code by removing redundant instructions and exploiting
CPU pipelining as much as possible.

Even if the result is not realtime, like some soft synths, it should
be as fast as I can make it. Is that the same as writing a soft synth
in Lisp? How is that different from using, say, Music 11?

> - Animating an OpenGL rendering of a hand from a stream of
> finger-joint data from a Cyberglove for a virtual reality
> environment.

You might like to look at Conal Elliot's work:
http://www.research.microsoft.com/research/graphics/elliott/RBMH/

If this can be done in Haskell, then why not also Lisp? Note how
Haskell does things like I/O. When the language has difficulty doing
things directly, we can pass high level instructions on to simpler
code. I like the idea of compiling to bytecodes which are then
interpreted by code written in C/C++. Just wrap a bytecode engine
around the OpenGL calls, and write the Lisp code at a higher level.

Instead of writing the app directly in Lisp, write Lisp code that
writes the code. In other words, write a compiler.

Somebody once said that real programmers don't write comments, they
write code that writes comments. I feel the same way about C code.
Lisp programmers should write Lisp that writes C. ;)
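
(In miniature, and purely as my own illustration: Lisp emitting a C
function that is specialised at generation time.)

  ;; Write a C dot-product function for a fixed length N.
  (defun emit-dot-product (n &optional (stream *standard-output*))
    (format stream "double dot~D(const double *a, const double *b)~%{~%" n)
    (format stream "    return ")
    (loop for i below n
          do (format stream "a[~D]*b[~D]~:[;~%}~%~; + ~]"
                     i i (< i (1- n)))))

  (emit-dot-product 3)
  ;; double dot3(const double *a, const double *b)
  ;; {
  ;;     return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
  ;; }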

Not that this is always easy to do! Nor is it always easy to convince
other people that this can be done. It's far easier to demonstrate
that it can be done and _why_ it should be done. Catch 22. You can't
do this until you've written it, and you might not get to write it
until you've convinced...Well, you get the picture. Perhaps you've had
this experience yourself.

Sometimes the real problem is showing that you can save time by doing
it this way. When there's a deadline looming ahead, it's amazing how
conservative people can become. When we've had the Lisp experience, we
find it can work the other way - but we need that positive experience.



> Knowing that I generally have a working C compiler available no matter
> what system I use or what else is going wrong with the system, is also
> extremely important.

This is why I like using C as the target language. Find (or write) a
Lisp to C compiler. I'm writing my own (or resurrecting an old one I
wrote a few years ago), mainly so I can avoid the assumptions made by
most Lisp compiler writers. I have to work with some very odd limits.
It occurred to me a few years ago that my compiler should work _very_
well with hand held computers - once I get it finished.

> ps. I can't stand C++, but I'll admit I once thought it was cool.

I've never liked C++ much. There are a few features which might've
helped me once (6 years ago?), but now I have big doubts. I prefer to
use only a few of the C++ extensions to C. Perhaps this is just a
reaction to the rapid changes over the last few years, but I worry
when I see compilers give me "new behaviour" warning messages.


--
Please note: my email address is munged; You can never browse enough

Leif Nixon

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

vfr...@netcom.com (Will Hartung) writes:

> Now, Lisp, as we are all very aware, is a large language with zillions
> of options and utilities (so's Emacs for that matter).
>
> Scheme is MUCH smaller. Scheme is so small, that a lot of functionality
> needs to be provided by the coder or through a library. That's an
> observation.

Posts like this would be much easier to understand if people
would stop talking about Common Lisp as just "Lisp".

Scheme is a Lisp.
Elisp is a Lisp.
Interlisp is a Lisp.
Lithp is a Lisp.
Lisp is not necessarily Common Lisp.

If you're referring to Common Lisp specifically, please use
the specific term "Common Lisp", not the general term
"Lisp". This makes reasoning about different Lisps much
easier to follow.

--
Leif Nixon SoftLab AB
-------------------------------------------------
E-mail: ni...@softlab.se Phone: +46 13 23 57 61
-------------------------------------------------

Espen Vestre

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

b...@wetware.com (Bill Coderre) writes:

> So FUN might be "getting to translate your ideas into code efficiently,
> without having to spend a lot of time dealing with "bureacracy" code --
> stuff that isn't part of the problem at hand, but is required before you
> can see the results of your idea."

What strikes me as the most FUN part of Common Lisp programming these
days is the power that macros and CLOS give you to create extremely
compact and easy-to-read code. Just a few lines of code can do wonders!
And the other really FUN part of programming in Common Lisp is the
incremental and dynamic style of work: I am working on two different
TCP-server-applications, and they have both been up for more than
a month without any restarts - while I am rewriting and adding
code to them constantly! And my very latest Common Lisp Joy is to
start to use part of the MOP to make these programs redefine their
class-hierarchies at run time....
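
(For the curious, here's the flavour of it - a made-up example, not
code from those servers; DEFINE-MESSAGE and HANDLE are my own
illustrative names:)

  ;; One macro collapses a recurring class-plus-method pattern
  ;; into a single line per message type.
  (defgeneric handle (msg))

  (defmacro define-message (name (&rest slots) &body body)
    `(progn
       (defclass ,name ()
         ,(mapcar (lambda (s)
                    `(,s :initarg ,(intern (string s) :keyword)
                         :reader ,s))
                  slots))
       ;; the handler body can refer to the message as MSG
       (defmethod handle ((msg ,name)) ,@body)))

  (define-message ping (sender)
    (format t "ping from ~A~%" (sender msg)))

  (handle (make-instance 'ping :sender "some-client"))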

> Perhaps if C programs did all of the error-checking and memory-management
> they were supposed to, they would also be slower. (I'm sure there's cases
> where Lisp is doing TOO MUCH error checking, and that's resulting in
> unnecessary speed loss, but hey.)

It's not generally the case that C programs are _fast_. Quite the
opposite: the popular programs these days typically are _surprisingly_
_fat_ and _slow_. Some reasons for this might be:

1) a lot of code is programmed by really bad C newbies
2) too much has to be done in too short time
3) too many software companies hold hardware shares

But, if you think of it, (1) is partly a consequence of (2).
And if you think more of it, both (1) and (2) may partly be
caused by the choice of language. In Common Lisp fewer people
could do more in shorter time!

There will always be a need for programming at the level of C
or lower. But today, there's a lot of programming being done
in C that could have been done a lot better with Common Lisp.

IMHO...

--

regards,
Espen Vestre

Martin Rodgers

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

Leif Nixon wheezed these wise words:

> If you're referring to Common Lisp specifically, please use
> the specific term "Common Lisp", not the general term
> "Lisp". This makes reasoning about different Lisps much
> easier to follow.

This is a good point. I have more Lisps on my machine than Common
Lisps. Perhaps this is just a personal bias, but I've also read about
more Lisps than Common Lisps.

OTOH, I can understand why people might wish to distinguish Scheme
from any other Lisp, and Common Lisp from the non-ANSI Lisps. So I
take care to say "Lisp" where I mean _all Lisps_, "Common Lisp" when I
mean ANSI Common Lisp, and Scheme when I specifically mean Scheme. We
could be more specific, of course, as say CLtL2, ans?? (I've forgotten
the name), R4RS, R5RS, etc.

I suspect that this will always be confusing. Will we need to qualify
the meaning, every time we say "Lisp"? I hope not, but it may be hard
to get everyone to agree on a consistent meaning and make it clear
that they're doing so in every post.

When in doubt, ask for a clarification. Common Lisp does this, after
all. Perhaps we can, too. ;)

Martin Rodgers

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

Espen Vestre wheezed these wise words:

> 1) a lot of code is programmed by really bad C newbies

What they lack may be a deeper understanding of _programming_. I don't
think that C is the culprit here. In the past, I've asked if a "Lisp
for Dummies" might help. I still wonder if the lack of such books
explains the attitude of many people to Lisp. Lisp books are written
by experienced programmers _for_ experienced programmers. Even the
tutorials aimed at "beginners" assume a great deal of intelligence.

I'd say that K&R's C tutorial does the same thing, which is why some C
programmers claim that book is "too hard". No it isn't! If you can't
understand that book, then you should touch C! (However, see below.)
That book assumes you already know how to program. _All_ the other C
books I've seen don't teach you anything about programming, never mind
how to program in C.

Not all Lisp books have much to say about programming, either. "The
Little Schemer" is one that does this very well, as it encourages the
reader to think about why we code something a particular way. Is there
a Lisp book like "Elements of Programming Style"? No matter, as that's
a book about general programming issues that apply to any language.

> 2) too much has to be done in too short time

Necessity is another culprit. There's no time to train people, to make
sure programmers understand the fundamentals.

> 3) too many software companies hold hardware shares

Ahh, a conspiracy theory! ;) We could also point a finger at Lisp
vendors, as some Lisp delivery systems also produce fat binaries.
Even if that's fixed overhead, the assumption is that the developer
can afford it. As has been noted, this ain't always true.

> There will always be a need for programming at the level of C
> or lower. But today, there's a lot of programming being done
> in C that could have been done a lot better with Common Lisp.

A lot of people will suggest other alternatives to C. Today, the media
and certain vendors are going mad about the current favourite
alternative, Java.

While it's easy for us to laugh, IMHO we should be encouraging
programmers for taking an important step forward. It may be small from
our perspective, but any language that strongly supports garbage
collection is a major improvement over malloc. Java is now being
attacked in _exactly_ the same way that Lisp was, just a few years
ago. So, from the point of view of those who resist "new ideas" like
GC, Lisp and Java are practically the same. I find _that_ hilarious,
but I'm comforted by the high profile and success of Java.

It's also easy to sneer at programmers who are only now discovering
other neat ideas that have been known to Lisp folk for years. The
arguments with which we can knock Java can also be used to sell Lisp
to those who resist it. Instead of saying that Java is a poor person's
Lisp, we can say that Lisp is a better Java; older and more mature.

There are two types of fool: one says "This is old and therefore
good.", the other says "This is new, and therefore better."

Thus, we can still use C, embrace Java, and build on these tools in
Lisp. What are Java people doing? Building front ends to network apps?
Lisp looks like a fabulous language to use at the back end, while Java
is a complementary tool for the front end. C code can provide low level
support at either end. The best of all worlds?


--
Please note: my email address is munged; You can never browse enough

"Oh no" - Marc Riley

Espen Vestre

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

m...@this.email.address.intentionally.left.crap.wildcard.demon.co.uk (Martin Rodgers) writes:

> WHat they lack may be a deeper understanding of _programming_. I don't
> think that C is the culprit here.

no, definitely. I was imprecise, thinking "programming newbies
[accidentally] using C", but writing "C newbies" ;-)

> Thus, we can still use C, embrace Java, and build on these tools in
> Lisp. What are Java people doing? Building front ends to network apps?
> Lisp looks like a fabulous language to use at the back end, while Java
> is a complementary tool for the front end. C code can provide low level
> support at either end. The best of all worlds?

This is a very good point, but it seems that the Java people are very
busy moving into the back end realm, so hurry up all you lisp programmers
and get your lisp servers and middleware out! (which reminds me that
I shouldn't spend time writing this ;-))

--

regards,
Espen Vestre

wan...@exploited.barmy.army

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

In article <vfr750Ep...@netcom.com>,
Will Hartung <vfr...@netcom.com> wrote:
>wan...@exploited.barmy.army writes:
>

[Snip]

>
>>I haven't found Lisp environments difficult to use at all.
>
>A lot of this is Baby Duck syndrome. I'm a long time ancient vi(1)
>hacker, and Emacs is NOT vi, so there is a bit of a learning curve
>there beyond moving the point around.
>

With all due respect, I find it hard to believe that anyone who
can use anything as obfuscated and unintuitive as vi would
find Emacs difficult! I mean you've got online help, and
only need a handful of commands to get going. Furthermore
if you're running under a GUI, you've got a nice menu
system that's mouse driven.


>But, more importantly, beyond simply saving the buffer and
>(LOAD ...)ing the file, there is the plethora of SHIFT-ALT-CTRL yada
>yada yada commands that interface Emacs to the Lisp environment. A bit
>more learning curve.

If you are talking about Elisp (The lisp integrated within
emacs), there are only two commands you need to know (in
addition to load) to get started in Elisp coding (actually
only one is really needed for that matter):

C-x C-e -> Evaluate last expression
M-x eval-buffer -> Evaluate the buffer

You can write programs using only those 3 commands (load and the two
above, and even then you can make do without one of them). When
you feel more comfortable you can look into the other commands
for moving around expressions, etc...

Source code formatting, paren matching, you get all that
automatically. So what's so difficult?

[Snip -- good analogy]


>Now, Lisp, as we are all very aware, is a large language with zillions
>of options and utilities (so's Emacs for that matter).

I don't want to sound nitpicky, but the name "Lisp"
denotes an entire family of languages, of which Scheme is one.
I'm assuming you mean Common Lisp (which is quite large)?

In any case, I'm fond of Common Lisp's size. I'd rather
have way too much than a bit too little any day.


>
>Scheme is MUCH smaller. Scheme is so small, that a lot of functionality
>needs to be provided by the coder or through a library. That's an
>observation.
>

>For the environments *I* work in, and the applications *I* tend to
>work on, Scheme is too limited, and I'm lazy enough to not be
>motivated to write all of my own stuff. Seeds and pine cones.
>

I agree. I like Scheme's elegance, and better treatment of
functions, and intelligent (and consistent) naming conventions,
but when it comes to functionality, it just falls flat on its
face. As it turns out, functionality is *THE* most
important consideration (for me), so I use Common Lisp
instead.


>Lisp is big big big. There are, what, a half dozen (at least)
>DIFFERENT ways to iterate in the language (do, loop, recursion,
>dotime, dolist, map...). I'm starting my Lisp training with a 20
>drawer roller cabinet FILLED with tools to use, instead of just a
>hammer and a screwdriver. *I* find all of these options confusing as
>I'm indecisive as to which to use when.

Well keep in mind though that these iteration constructs are quite
different, so it becomes much easier to pick one once you
read the description of each.

For example, mapcar applies an n-arity function to
n lists (using the car of each list as an argument to this function).
So it is used for list transformations (i.e., converting
every element of a list to something else, like
incrementing every element of a list).

Things like dolist are better for side-effecting operations.

Things like map-into destructively modify their arguments.

So decide what you want to do with your iteration, and take
any paradigm considerations into account and things should be
much simpler.

For example, I prefer a functional style when programming
so I never use do, dolist, dotimes, etc... I only use
non-destructive list operations (no map-into, etc...)
and generalized recursion. This heavily reduces the number
of possibilities in my case.
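
(To make the contrast concrete, here is "increment every element"
three ways:)

  ;; mapcar: pure transformation, returns a fresh list.
  (mapcar #'1+ '(1 2 3))            ; => (2 3 4)

  ;; dolist: iteration for side effects, returns NIL.
  (dolist (x '(1 2 3))
    (format t "~A " (1+ x)))        ; prints 2 3 4

  ;; map-into: destructively overwrites an existing sequence.
  (let ((v (vector 1 2 3)))
    (map-into v #'1+ v))            ; => #(2 3 4), V itself modified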


>
>With more experience, it'll be natural, but right now Lisp is
>unintuitive to me because I'm always second guessing half the things I
>do. DEFSTRUCT, or DEFCLASS? A-List or Hash table? List or vector?
>Everything works with everything. Sure there are square pegs and round
>holes, but they're all made of foam, so...press to fit.
>

After a while it will be easier, but for now just keep in mind
what you want to do, how you want to do it, and any paradigms
you want to stick with and you should be ok.

Again, since I've found maps, reductions, and filters to be so
useful for what I'm doing, I'll often pick embedded lists over
defstruct and defclass.

I'm not suggesting you do the same, I'm merely giving you an
example of how easy my choices are once I've made my mind
as to a paradigm to follow and how I want to get my job done.


>When you're used to other languages, when you've focused your problem
>solving style on using a hammer and screwdriver, and some guy gives
>you a roller cabinet AND a lathe and raw stock to make more, you can
>get a bit overwhelmed.


Sure, but for the time being you can just pick out a hammer and
screwdriver and set everything else aside and still get
the job done, and learn about all the other goodies later.

The only problem with having too many tools is in picking
the best one for the job. You can still easily pick
a tool for the job, it just may not be the best one.


>
>>Yeah, GUI stuff is a dreadful bore. I can't think of anything
>>more tedious than writing some GUI front end, except maybe
>>fixing build errors for a Fortran or Cobol program.
>
>Yeah, but at least the errors you're fixing don't go away when the
>user changes his font size :-).

:)

>
>--
>Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr...@netcom.com
>1990 VFR750 - VFR=Very Red "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
>1993 Explorer - Cage? Hell, it's a prison. -D. Duck


--
Regards,
Ahmed

My email address has been altered to avoid spam. To email me,

wan...@exploited.barmy.army

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

In article <pdBN.350$hL4.2...@typhoon.texas.net>,

Dan Higdon <hd...@charybdis.com> wrote:
>wan...@exploited.barmy.army wrote in message

>>Well at work I'm restricted by something a little more down to


>>earth. They use C/C++/Fart-ran (misspelling intentional),
>>so I'm stuck using them. At home, Lisp does everything
>>I want it to do with perfectly acceptable performance so
>>I don't have to stain my hands over there.
>
Looks like we're kindred spirits of a sort, substituting
SML for Lisp. :-) I still can't believe I'm defending C/C++,
since I'm their biggest detractor here at work. Oh well,
back to it....

Looking through your post I can agree that we seem to be
birds of a feather (is this the start of a beautiful friendship? :)).
I like quite a few languages, mostly functional languages,
although this little chat has convinced me to give Forth another
chance, if for nothing else than fun (I've already downloaded like
a billion interpreters :)).

I wouldn't mind being a C/C++ detractor at work if I only
had the chance to do some detracting!


>
>>Scheme IS Lisp. It's a dialect of Lisp. You are confusing
>>a family of languages (Lisp) with specific implementations
>>(like Common Lisp). I understand what you are trying to say,
>>but you should be more specific to avoid confusion.
>
>Absolutely - a subtle but important difference. (Chalk it up to
>a very long day.) Common Lisp was the dialect I specifically
>don't care for, although it does seem to be the most capable
>of "real world" development, due to its extensive tools.

No problemo, I may have made that mistake in the past too,
and given that Common Lisp does seem to be the most
popular lisp dialect, it's no surprise that it is
often thought of as "Lisp".

I like Scheme's elegance and consistency but as you said
Common Lisp is the most capable of "real world" development.
Some people like to brag about Scheme's small size; what they
neglect to mention, however, is that the size is due to a
serious lack of functionality.

What I wouldn't give for a Scheme with Common Lisp power.


>
>>Have you by any chance tried Haskell? That has some serious
>>type checking and type inferencing, and IMO a nicer syntax
>>than the ML family of languages. It is purely functional
>>so when you get to IO you'll have to deal with monads, but
>>it's a great language and also tons of fun.
>
>Yes - actually I got my start in non-lisp functional languages with
>Gofer, a dialect of Haskell. I agree - I really like the syntax,
>especially the "literate" versions, where the "normal" text was
>comments, and you had to mark the source lines!

Hugs (another dialect of Haskell) was my first introduction to the
glorious world of Functional programming. Granted, I had used
Common Lisp before in AI, but it was Haskell that
introduced me to the functional paradigm, which I continue to use in
Lisp (Scheme, Common Lisp, Elisp).


>In the end, I'm using SML because I like the fact that it's not
>quite as pure functional, and it's not lazy. (Although you can
>simulate lazy evaluation like you do in Scheme - with promises.)
>Lazy languages seem a little tricky to reason about mathematically.
>Perhaps that's just because I'm a little out of touch with modern
>CS practices.

Are promises a completely accurate simulation of lazy
evaluation ala Haskell? I mean in Haskell you can
use things like infinite sized lists without a problem
because the system will only access as much of the
list as is needed. Do promises do that as well?


>
>>Yes I have. I've programmed in VAX PDP-11, x86 and 65xx
>>(C64 in particular -- 6510) assembly languages.
>
>I stand corrected. (Gotta love the 6502 stuff - first RISC chip, IMHO)

Tell me about it! What I especially loved about the C64
in particular was that game support was built in, and very
easy to use. Hardware sprites, collision detection, it was
all there.


>
>>Well I see nothing enjoyable or even worthwhile in C, so I guess
>>we'll just have to agree to disagree.
>
>Agreed. That's why there are so many computer languages. :-)

:). And finding out about all these languages is almost as much
fun as coding in them. Keeps me off the street :).


>
>>I remember reading about a C interpreter a LONG time ago.
>>It may have been a sort of garage project in somebody's
>>spare time, but I thought you'd be interested in knowing
>>that there may have been such a beast somewhere :).
>
>Yep, I remember that one too. Strange but interesting idea.
>I don't remember how well it worked.

I was lucky to even remember that it existed :).
This was back in my BBS days, before I had internet access,
almost a decade ago. I used to BBS with a Tandy 1000EX
and a 1200 Baud modem. Ahhh, the bad old days :)


>
>>Comparing C and Lisp is fair, since both are general purpose
>>languages. Just because C is a portable pseudo-assembler
>>does not make it immune from comparison or criticism. We're
>>talking about getting the job done, and intentions mean nothing now,
>>only results.
>
>Fair enough. That would be like holding Lisp's symbolic processing
>nature against it for numeric or data processing applications.


Lisp can do numeric and data processing perfectly well.
In particular, mappings, filters, and reduction functions (they
are generalized to work over arbitrary sequences) can make such tasks
far better suited to Lisp than to most other languages.
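
(A small example of the shape I mean - toy data, but it generalizes:)

  ;; Sum of squares of the non-negative readings, over a vector.
  (let ((readings #(3.2 -1.0 4.5 -0.2 7.1)))
    (reduce #'+
            (map 'vector (lambda (x) (* x x))
                 (remove-if #'minusp readings))))
  ;; => roughly 80.9 (10.24 + 20.25 + 50.41)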


>
>>As for a GIF decoder, since I'm not sure what the algorithm
>>is (I suppose I could find out), I can't say whether or not
>
>It involves a lot of pointer arithmetic and such. You could certainly
>do it in Lisp, but my argument is that expressing memory accesses
>and pointer arithemetic is cleaner in C than Lisp, since C has
>notations to directly support these activities. Much like lisp has
>notation for list handling, which is mildly painful in C.
>

Well the pointer operations -- are they for accessing devices
as memory (like screen buffers) or data (array accesses)?
I can't see a problem with Lisp using its own data structures to
handle this, rather than playing with pointers.

Of course as I've said, I'm not familiar with the GIF
decoder algorithm. I've done RLE, and that's about it.


>>The one thing that got to me about Forth (and this may be due to my
>>own ignorance of the language at the time) was the inability to
>>get at any data lower than 3 levels down on the stack without
>>popping everything before it off. Things like rotate and swap
>[snip]
>
>Most Forths have an 'index' word that you can use to go arbitrarily
>deep into the stack. Modern Forth dialects even let you declare
>"parameters", which become local words that produce the value
>passed in, although that's kind of "cheating" IMHO. :-)

Yeah, it would seem to take the fun out of it :). Another
poster (whose name escapes me at the moment) was kind enough to
send me email, informing me about some other words that
could do what I was complaining about, but informed me that
this should be largely unnecessary if I factor my code
properly.

>>Hmmm, not even Allegro Common Lisp?
>
>Don't know about that one, so I may be wrong about that. It's likely that
>Allegro Common Lisp wouldn't fit my memory footprint though.
>(Still doesn't validate my argument here - it would be interesting to
>know if Allegro can call DirectX. I'd better do some research....)

I've got a free (limited) version of Allegro at home on a
P-100 with 16 Megs of RAM running Win95. Allegro has visual
GUI design capabilities, and a nice environment. I don't
know what the minimum system requirements were, but the
program ran pretty decently (for what little I did with it).
I only suggest that it may have a DirectX interface
because it has GUI capabilities, so it seems as good
a choice as any.

Check out http://www.franz.com for more details.

[Snip]

>I think CLOS is a good example of how Lisp can be cleanly
>expanded to new programming models without kludging up
>the original language. One of the advantages of not really
>having a syntax, I suppose. :-)

Pretty much :). The fact that we have introspective
capabilities, closures, a very consistent function-driven
syntax, and a powerful macro system means that extending
the language is very simple.
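
(The standard party trick: CL has no WHILE loop, and adding one takes
three lines:)

  (defmacro while (test &body body)
    `(loop (unless ,test (return))
           ,@body))

  (let ((n 3))
    (while (plusp n)
      (format t "~A " n)
      (decf n)))                    ; prints 3 2 1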

>
>(I've only ever looked at CLOS, so I have no real experience
>with it, and may be completely off-base)

I'm no expert at CLOS either :)


[Snip]

>
>"Obfuscation is in the eye of the beholder". I think you're
>referring to rampant abuses of the pre- and post-increment
>operators, such as "while ((*a++ = *b++) != '\0');" for
>a string copy. Yeah, it's ugly, but most C programmers
>understand it as an idiom. I personally would use strcpy(),
>but I try to focus on readability. Probably because I'm a
>freak who likes to program in functional languages on his
>off time. :-)
>

Well I can understand what the line is doing, but
it's still bad programming practice. You're cramming
a loop condition with a loop body, and anybody with
a clue will tell you that this is complete crap.

>>But these tend to be low level considerations that seem
>>to rely on architectural-dependent issues. Can you say
>>that these tricks will guarantee faster code on every
>>platform?
>
>No, indeed you can't. That's why I shy away from them for
>the most part. But, I sometimes find myself disassembling code
>to see if I can trick the compiler into producing better code for
>a given expression. After a while, you find out what works and
>what doesn't for your compiler.
>
>Actually, that last paragraph may be the most damning criticism
>of C/C++ I can think of! Imagine having to program in a language
>where second-guessing the code generator was even something
>you were TEMPTED to do! Scary. Scarier still is that I've often
>wondered how to disassemble SML functions. It's a sickness -
>I think I might need professional help. :-)
>

You do need help, and fast :)


[Snip]

>
>Like Java! :-) I've followed a lot of the so-called practical languages,
>like Eiffel, Dylan and Oberon in hopes that a better GP language
>might catch on, but so far, it's been all for naught. Lisp languages
>have the disadvantage that many find their syntax (or lack thereof)
>difficult to read, which is probably the largest obstacle to Lisp world
>dominance. Look at Java - I suspect the reason people even consider
>looking at it is that it looks just like a light version of C++.
>

While I'll be the first to state that Java is overrated,
I will also be the first to state that it is better than
C/C++. It does have some decent features (like all
objects being references), but also some idiotic
shortcomings (primitive data types are not objects).
Overall, I think that even if it is adopted, it will be a
step in the right direction (at least it has garbage
collection).


>Sigh. I suppose most of us will have to continue writing fun programs
>in fun languages in our off-time, and slog through cruddy, cryptic
>languages at work.
>

Unfortunately it looks like this will be true for some time to
come.


>----------------------------------------
>hd...@charybdis.com
>There's no one left to finger
>No one here to blame
>
>

--
Regards,
Ahmed

My real email address is punkrock at cs dot uh dot edu

David H Wild

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

In article <MPG.f720d97f...@news.demon.co.uk>,
Martin Rodgers
<m...@this.email.address.intentionally.left.crap.wildcard.demon.co.uk>
wrote:

> Lisp books are written by experienced programmers _for_ experienced
> programmers. Even the tutorials aimed at "beginners" assume a great
> deal of intelligence.

I know what you mean, but I think that it's really *background knowledge*
rather than intelligence.

--
__ __ __ __ __ ___ _____________________________________________
|__||__)/ __/ \|\ ||_ | / Acorn Risc_PC
| || \\__/\__/| \||__ | /...Internet access for all Acorn RISC machines
___________________________/ dhw...@argonet.co.uk
Uploaded to newnews.dial.pipex.com on Thu,12 Mar 1998.21:51:53


Martin Rodgers

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

Espen Vestre wheezed these wise words:

> no, definitely. I was imprecise, thinking "programming newbies
> [accidentally] using C", but writing "C newbies" ;-)

This is what I thought you meant. ;)



> > Thus, we can still use C, embrace Java, and build on these tools in
> > Lisp. What are Java people doing? Building front ends to network apps?
> > Lisp looks like a fabulous language to use at the back end, while Java
> > is a complementary tool for the front end. C code can provide low level
> > support at either end. The best of all worlds?
>
> This is a very good point, but it seems that the Java people are very
> busy moving into the back end realm, so hurry up all you lisp programmers
> and get your lisp servers and middleware out! (which reminds me that
> I shouldn't spend time writing this ;-))

Indeed, they are! The Golden Horde are almost upon us. Well, it
sometimes seems like it. History may record a different story.


--
Please note: my email address is munged; You can never browse enough

"Oh knackers!" - Mark Radcliffe

Mike Dunn

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

wan...@exploited.barmy.army wrote:
> In article <pdBN.350$hL4.2...@typhoon.texas.net>,
> Dan Higdon <hd...@charybdis.com> wrote:
> >wan...@exploited.barmy.army wrote in message
>
> >>Hmmm, not even Allegro Common Lisp?
> >
> >Don't know about that one, so I may be wrong about that. It's likely that
> >Allegro Common Lisp wouldn't fit my memory footprint though.
> >(Still doesn't validate my argument here - it would be interesting to
> >know if Allegro can call DirectX. I'd better do some research....)
>
> I've got a free (limited) version of Allegro at home on a
> P-100 with 16 Megs of RAM running Win95. Allegro has visual
> GUI design capabilities, and a nice environment. I don't
> know what the minimum system requirements were, but the
> program ran pretty decently (for what little I did with it).
> I only suggest that it may have a DirectX interface
> because it has GUI capabilities, so it seems as good
> a choice as any.

Just a student here, please ignore me if I don't know what I'm talking
about. :)

I'm doing a project (robotic simulation) using Allegro CL and Direct-X.
It is fairly easy to do using DLL function calls. Allegro has filled
all my needs so far (except for a terrible help system, IMHO). But
then, Common Lisp is a difficult language to learn; it's not just the
help system's fault. :)

Brian Denheyer

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

Christopher Lee <chri...@ri.cmu.edu> writes:

>
> - Real-time robot control code (I actually wrote a simple embedded
> real-time Scheme interpreter for _high-level_ control of the
> robot, but I would never use it for computing real-time robot
> dynamics and control equations at a lower-level).
> Granted, I could use something like Ada and possibly Forth for
> this, but I wouldn't use Lisp.

Couldn't compiled Scheme be used here? I am not clear on whether this
is a language limitation you are alluding to or a performance
limitation.

> - Animating an OpenGL rendering of a hand from a stream of
> finger-joint data from a Cyberglove for a virtual reality
> environment.

See above.

>
> Knowing that I generally have a working C compiler available no matter
> what system I use or what else is going wrong with the system, is also
> extremely important.

The portability of C is nice. The only thing that seems to be a
problem in Scheme portability is R5RS macros and the numeric tower
(engineers need complex #'s!). Neither seems to be very important if
you are mostly concerned with real-time sort of things.

slib seems to adequately handle almost all other needs, although I
confess that Python is VERY nice in that regard,
i.e. w.r.t. libraries.

> ps. I can't stand C++, but I'll admit I once thought it was cool.

Ditto.

--

Brian Denheyer
bri...@northwest.com


Ken Nakata

unread,
Mar 12, 1998, 3:00:00 AM3/12/98
to

m...@this.email.address.intentionally.left.crap.wildcard.demon.co.uk (Martin Rodgers) writes:
[...]

> I'd say that K&R's C tutorial does the same thing, which is why some C
> programmers claim that book is "too hard".

If they claim that, they are no "C programmers".

> No it isn't! If you can't
> understand that book, then you should touch C! (However, see below.)

^ maybe you forgot a "not" here?

[...]


> While it's easy for us to laugh, IMHO we should be encouraging
> programmers for taking an important step forward. It may be small from
> our perspective, but any language that strongly supports garbage
> collection is a major improvement over malloc.

I would strongly second what you state here IF "malloc" were "free"
instead. Who hates cons? It's free() which I hate to have to call.

> Java is now being attacked in _exactly_ the same way that Lisp was,
> just a few years ago. So, from the point of view of those who resist
> "new ideas" like GC, Lisp and Java are practically the same. I find
> _that_ hilarious, but I'm comforted by the high profile and success
> of Java.

It seems to me that people have to be taught to trust GC, or the
simple fact that GC does it better than most of us do (how many times
did you hear about some server program leaking memory? And how many
products are there to detect memory leakage in C programs?). I've
seen someone so worried about when a Java instance is destroyed. I
just couldn't see why it was so important. Perhaps he was
braindamaged by extended exposure to C++ which he seemed to have.

Ken
--
Any unsolicited message soliciting purchase of any product or service
sent to any of my accounts is subject to a $50 handling charge per
message. You have been notified.

Dan Higdon

unread,
Mar 13, 1998, 3:00:00 AM3/13/98
to

wan...@exploited.barmy.army wrote in message
<6e9h3f$4t9$1...@Masala.CC.UH.EDU>...

>I like quite a few languages, pretty much functional languages
>although this little chat has convinced me to give Forth another
>chance, if for nothing else than fun (I've already downloaded like
>a billion interpretors :)).

<GRIN!> Like I said, I think the world may have passed Forth by.
Just my opinion, of course - chances are it's just me that's passed
Forth by. I've become accustomed to named variables, parameter
lists and strong type-checking. :-)

>I like Scheme's elegance and consistency but as you said
>Common Lisp is the most capable of "real world" development.
>Some people like to brag about Scheme's small size, what they
>neglect to mention however is that the size is due to a
>serious lack of functionality.

That's one reason I don't mess around with Scheme as much anymore.
I still read the newsgroup though, because I love Scheme's concept and
the sorts of discussions that arise from contemplating Scheme.

>What I wouldn't give for a Scheme with Common Lisp power.

Amen!

>Hugs (another dialect of Haskell) was my first introduction to the
>glorious world of Functional programming. Granted, I had used
>Common Lisp before in AI, and , but it was Haskell that
>introduced me to the functional paradigm, which I continue to use in
>Lisp (Scheme, Common Lisp, Elisp).

Yep. I started out with some nameless Lisp interpreter for my
Apple ][ in the mid-80s, and then "discovered" Common Lisp
at UTexas. (I also did my share of Icon, Prolog, and Smalltalk,
but they were all too "special purpose" to be generally useful, IMHO.)

I gotta admit, though, that I like the Haskell 'where' syntax a lot. Much
nicer than SML's let...in...end (which works almost identically
to Lisp's (let...) form, for those who don't program in SML or Haskell.)

>Are promises a completely accurate simulation of lazy
>evaluation ala Haskell? I mean in Haskell you can
>use things like infinite sized lists without a problem
>because the system will only access as much of the
>list as is needed. Do promises do that as well?

Nope, not at all. If you know Scheme promises, you know what
I'm talking about. You can evaluate (define p (delay expr)) and get
back a "promise" object. It won't be evaluated until you use
(force p) on it. At that point, p evaluates to the result of the delayed
expression, and all subsequent (force p)'s will return that same value
without recomputing. With this and some clever programming, you
can implement lazy lists (with lazy-car, lazy-cdr, whatever) that can
process these infinite lists. As far as "real" lazy programming goes,
you're out of luck.
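
(Common Lisp doesn't have DELAY/FORCE either, but you can roll the
whole thing by hand - a sketch, with the memoization Scheme requires;
all the names here are mine:)

  (defstruct (promise (:constructor make-promise (thunk)))
    thunk value (forced-p nil))

  (defmacro delay (expr) `(make-promise (lambda () ,expr)))

  (defun force (p)
    (unless (promise-forced-p p)
      (setf (promise-value p) (funcall (promise-thunk p))
            (promise-forced-p p) t))
    (promise-value p))

  ;; An "infinite" list: a cons whose cdr is a promise.
  (defun lazy-naturals (n)
    (cons n (delay (lazy-naturals (1+ n)))))

  (defun lazy-take (k cell)
    (if (zerop k)
        '()
        (cons (car cell) (lazy-take (1- k) (force (cdr cell))))))

  (lazy-take 5 (lazy-naturals 0))   ; => (0 1 2 3 4)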

>>I stand corrected. (Gotta love the 6502 stuff - first RISC chip, IMHO)
>
>Tell me about it! What I especially loved about the C64
>in particular was that game support was built in, and very
>easy to use. Hardware sprites, collision detection, it was
>all there.

Yep, we're just now starting to catch up with 3D games - all the
C64 stuff worked great for 2D games.

>>Agreed. That's why there are so many computer languages. :-)
>
>:). And finding out about all these languages is almost as much
>fun as coding in them. Keeps me off the street :).

You and me both. I catch a decent level of friendly hostility from
my coworkers for being a "language hound". They don't understand
what a conceptual advantage it is to understand all the
paradigms embodied by different language designs (functional,
lazy, logic/Horn clause, etc.)

>Lisp can do numeric and data processing perfectly well.
>In particular using mappings, filters, and reduction functions (they
>are generalized to work over arbitrary sequences) can make such tasks
>far better suited to Lisp than to most other languages.

C's Fortran-like expression syntax is convenient. Of course, one
of the first macros I wrote was an infix macro that worked something
like: (infix 3 + 2 * x) => (+ 3 (* 2 x)). Try THAT in C. :-)
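
(Since people always ask: here's a minimal version of such a macro -
binary + and * only, atoms only, with the usual precedence; a sketch,
not the full thing:)

  (defmacro infix (&rest tokens)
    (labels ((parse-sum (toks)
               (let ((pos (position '+ toks :from-end t)))
                 (if pos
                     `(+ ,(parse-sum (subseq toks 0 pos))
                         ,(parse-product (subseq toks (1+ pos))))
                     (parse-product toks))))
             (parse-product (toks)
               (let ((pos (position '* toks :from-end t)))
                 (if pos
                     `(* ,(parse-product (subseq toks 0 pos))
                         ,(first (subseq toks (1+ pos))))
                     (first toks)))))
      (parse-sum tokens)))

  (macroexpand-1 '(infix 3 + 2 * x))   ; => (+ 3 (* 2 X))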

>Well the pointer operations -- are they for accessing devices
>as memory (like screen buffers) or data (array accesses)?
>I can't see a problem with Lisp using its own data structures to
>handle this, rather than playing with pointers.

No, just memory/array access. And there's nothing magic about
GIF decoders, I was just saying that C's syntax is more streamlined
for memory access and linear processing than Lisp. You may
only agree that it's more terse and confusing, however. :-)

>I've got a free (limited) version of Allegro at home on a
>P-100 with 16 Megs of RAM running Win95. Allegro has visual
>GUI design capabilities, and a nice environment. I don't

[snip]

I tried that one out, but never had time to really pound on it
enough for full evaluation. It did look really nice, however.

>Pretty much :). The fact that we have introspective
>capabilities, closures, a very consistent function-driven
>syntax, and a powerful macro system means that extending
>the language is very simple.

Lisp's greatest strength, if you ask me. Otherwise, I see Lisp
as just another non-pure functional language without static
type checking. :-)
(let's see how many black eyes I get for THAT comment!)

>>"Obfuscation is in the eye of the beholder". I think you're
>>referring to rampant abuses of the pre and pos-increment
>>operators , such as "while ((*a++ = *b++) != '\0');" for

>>a string copy. Yeah, it's ugly, but most C programmers...


>
>Well I can understand what the line is doing, but
>it's still bad programming practice. You're cramming
>a loop condition with a loop body, and anybody with
>a clue will tell you that this is complete crap.

But it's still an idiom, even with the loop invariant slammed
in there with the mutator condition.

>You do need help, and fast :)

That is a distinct possibility. :-)

>While I'll be the first to state that Java is overrated,
>I will also be the first to state that it is better than
>C/C++. It does have some decent features (like all
>objects being references), but also some idiotic
>shortcomings (primitive data types are not objects).

Yes, it is a better language. Once the implementations
catch up, I hope it gets used more.

>Overall, I think that even if it is adapted, it will be a
>step in the right direction (at least it has garbage
>collection).

Maybe Java will finally be the language that teaches the
world that automatic memory management is a GOOD thing.
Best of all, Apple can't kill it like they more-or-less did Dylan. :-/

Then, they'll come to see the beauty of the classical GC'ed
language - Lisp! (Ok, that was really cheezy, but the sentiment
holds true.)

Rob Warnock

unread,
Mar 13, 1998, 3:00:00 AM3/13/98
to

<wan...@exploited.barmy.army> wrote:
+---------------

| >simulate lazy evaluation like you do in Scheme - with promises.)
|
| Are promises a completely accurate simulation of lazy
| evaluation ala Haskell? I mean in Haskell you can
| use things like infinite sized lists without a problem
| because the system will only access as much of the
| list as is needed. Do promises do that as well?
+---------------

Well, in some ways they're even lazier, since you *must*
"force" a promise to get it to evaluate itself. [To be more
precise, the standard doesn't *require* auto-forcing, but
it does *allow* implicit forcing by primitives. But few
implementations do that.]

But in general, yes, Scheme promises allow straightforward construction
of virtually-infinite objects of which only the forced subset is manifest
(e.g., a list of the positive integers, or a list of primes).

However, note that Scheme requires that a forced value be memoized
(cached) so that if it's forced again it's not recomputed. Thus Scheme's
promises may differ from forms of lazy evaluation which allow unrestricted
side effects, e.g.:

> (define foo
    (let ((x 0))
      (delay (begin (set! x (+ x 1)) x))))
> foo
#<promise>
> (force foo)
1
> (force foo)
1

Contrast this with a closure with local state ("forced" by calling it):

> (define bar
    (let ((x 0))
      (lambda () (set! x (+ x 1)) x)))
> (bar)
1
> (bar)
2


-Rob

p.s. IMHO, Scheme promises are a gross hack which would have been totally
unnecessary if Scheme had had even a simplistic macro facility required
in the base language [before R5RS]. The only thing "delay" *really* gives
you is a little syntactic sugar for writing an unevaluated expression.
R5RS recognizes this by downgrading "delay" & "force" from "syntax" &
"procedure" to "library syntax" & "library procedure" (meaning they can
be readily expressed in terms of "more primitive" required features).

-----
Rob Warnock, 7L-551 rp...@sgi.com http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673 [New area code!]
2011 N. Shoreline Blvd. FAX: 650-933-4392
Mountain View, CA 94043 PP-ASEL-IA

Ben Caradoc-Davies

unread,
Mar 13, 1998, 3:00:00 AM3/13/98
to

Christopher Lee (chri...@ri.cmu.edu) wrote:
: I use both Scheme and C for my programming. As much as I like Scheme
: and use it as much as I can, it doesn't seem to make sense for many
: different kinds of applications. Here are a few things I have done
: recently for which C has seemed the best solution:
: - Real-time robot control code (I actually wrote a simple embedded
:   real-time Scheme interpreter for _high-level_ control of the
:   robot, but I would never use it for computing real-time robot
:   dynamics and control equations at a lower-level).
:   Granted, I could use something like Ada and possibly Forth for
:   this, but I wouldn't use Lisp.

You might be interested in Aubrey Jaffer's (author of scm) web pages.
http://www-swiss.ai.mit.edu/~jaffer/Work.html

Aubrey describes writing the low-level part of an NT device driver in Scheme,
which is then automatically translated to C.

--
Ben Caradoc-Davies

Be sure to include a "cookie-for-procmail" in any email sent to me.

Rob Warnock

unread,
Mar 13, 1998, 3:00:00 AM3/13/98
to

Ben Caradoc-Davies <bm...@physics.otago.ac.nz> wrote:
+---------------

| You might be interested in Aubrey Jaffer's (author of scm) web pages.
| http://www-swiss.ai.mit.edu/~jaffer/Work.html
|
| Aubrey describes writing the low-level part of an NT device driver
| in Scheme, which is then automatically translated to C.
+---------------

FWIW, note that Jaffer recently added explicit BSD-style use/redistribution
permission to the copyright on "Schlep", his Scheme compiler:

ftp://ftp-swiss.ai.mit.edu/pub/users/jaffer/schlep.scm

Also note that "Schlep" makes no pretense of being a fully-general
Scheme compiler -- it's really just a tool to let Aubrey write
(and test) the software *he* needs in Scheme, yet deliver it in
(fairly human-readable) C, sometimes linking with C code others write.

As such, perhaps it should be thought of as a "template" of a compiler
or a metacompiler that one bends & tweaks as needed for the project at
hand. (If you take snapshots of Schlep every few months, you'll see
that's what he seems to be doing himself.)

All I'm saying is that it will almost certainly "need some assembly"
[significant local modification] to be useful for anyone else than
Aubrey. (Which didn't detract at all from its usefulness to *me*!)


-Rob

Martin Rodgers

unread,
Mar 13, 1998, 3:00:00 AM3/13/98
to

Ken Nakata wheezed these wise words:

> > I'd say that K&R's C tutorial does the same thing, which is why some C
> > programmers claim that book is "too hard".
>
> If they claim that, they are no "C programmers".

Agreed. I certainly wouldn't employ them as C programmers.



> > No it isn't! If you can't
> > understand that book, then you should touch C! (However, see below.)
> ^ maybe you forgot a "not" here?

Well spotted! A small but significant word. Thanks.

> I would strongly second what you state here IF "malloc" were "free"
> instead. Who hates cons? It's free() which I hate to have to call.

Or malloc/free? I like a language to define constructor functions for
me. When I use C, I have to write them myself. So I dislike malloc as
well as free. I also want a type system that can check for type errors
at either compile or run time. C never does this - the programmer must
do it.

Note the type of the object that malloc returns. Awooga! Awooga!
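
(Compare what a single DEFSTRUCT defines for you - constructor, typed
accessors, and a predicate, with no malloc or cast in sight:)

  (defstruct point (x 0) (y 0))

  (let ((p (make-point :x 3 :y 4)))
    (list (point-x p)               ; => 3
          (point-p p)               ; => T, the type is checkable
          (type-of p)))             ; => POINT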



> It seems to me that people have to be taught to trust GC, or the
> simple fact that GC does it better than most of us do (how many times
> did you hear about some server program leaking memory? And how many
> products are there to detect memory leakage in C programs?). I've
> seen someone so worried about when a Java instance is destroyed. I
> just couldn't see why it was so important. Perhaps he was
> braindamaged by extended exposure to C++ which he seemed to have.

The finalisation issue. It's believed necessary for C++ objects to
clean up themselves _and_ to remember any other objects they refer to.
If you've ever managed a disk without a file system, you should be
able to appreciate that a GC serves a similar role.

I don't see why we have to make such a big distinction between one
level in the memory hierarchy and another, yet this is exactly what
many C/C++ programmers do. The last time I had to explain this to one
of these people, I discovered that he assumed that because malloc can
return NULL, this is somehow significant; that allocating on the heap
is vulnerable to a "memory full" error, while allocating on the stack
is not. I pointed out that even stack memory is finite. He shut up.

George Orwell, in 1984, made a point about how language defines how we
think. (This is one of the more accessible examples of this idea, so
it should be ideal for recommending to programmers who've not thought
about how this applies to computing.) If you don't have a word for
"unhappy", you can only say "not happy". Similarly, if we don't have
an error condition for stack overflow (or we're not aware of it), then
we may - wrongly - assume that this can never happen.

I think that this is the error that some languages encourage us to
make. Perhaps this could be a criticism of languages like Lisp, which
tell us that objects persist indefinitely. In practice, we use GC to
transparently recover memory that is no longer used. If the lack of an
error condition for "heap full" is a valid criticism of a language,
then it also applies to filesystems, which many would deny.

The truth is rather that the programmer must take responsibility for
handling errors, wherever they may occur. A "disk full" error can
crash a program if it can't recover, which leads us neatly to the
Halting Problem. So this isn't just a language issue. ;)

Meanwhile, ANSI Common Lisp addresses the "heap full" error with
storage-condition. I've no idea what ANSI C++ will have.
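
For example, catching it might look like this (a sketch; whether a
given implementation actually signals STORAGE-CONDITION for an
oversized allocation is implementation-dependent):

(handler-case
    (make-array (expt 10 12))     ; try an absurdly large allocation
  (storage-condition (c)
    (format t "~&Allocation failed: ~A~%" c)
    nil))                         ; recover instead of crashing
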
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"As you read this: Am I dead yet?" - Rudy Rucker
Please note: my email address is gubbish

Michael Hobbs

Mar 13, 1998

Dan Higdon wrote:
>
> [Many good comments about using intelligent higher-level languages]

>
> Sigh. I suppose most of us will have to continue writing fun programs
> in fun languages in our off-time, and slog through cruddy, cryptic
> languages at work.
>

Yes, but how much do you get paid because you are able to understand the
cruddy, cryptic language? :-) I sometimes enjoy thinking about the
analogy of computer programmers as priests (in the ancient sense). We
are highly respected (and well paid) because we are able to understand
the mystifying oracles and omens that the average person is unable to
comprehend.

I know this statement is incredibly cliche, but it fits: When life gives
us lemons, it's fortunate for us that we are able to make lemonade.

Another cliche: just my $0.02.

- Mike

Steve Gonedes

Mar 13, 1998

Martin Rodgers wrote:
>

>
> The truth is morely that the programmer must take responsibility for
> handling errors, where ever they may occur. A "disk full" error can
> crash a program if it can't recover, which leads us neatly to the
> Halting Problem. So this isn't just a language issue. ;)


When this happened to me under Unix, the program kept running and
happily began to truncate all the files I tried to save (sans the error
message). What a pleasure.

I think the problem here, when writing programs that try to work with
the rest of the Unix toolset, is the lack of a consistent, well-defined
way of doing something as simple as getting the free space on a
partition (not all Unix systems have statfs, I believe). How sad. The
kernel reserves like 5% of the disk for itself too; sure would have been
nice if it was smart enough to share or warn me. (I didn't get an error
message for like a half hour; it was GNU mv that finally said hey bozo -
you have no space left.)

Frank A. Adrian

Mar 13, 1998

Steve Gonedes wrote in message <6ec6os$7...@bgtnsc03.worldnet.att.net>...

>When this happened to me under Unix, the program kept running and
>happily began to truncate all the files I tried to save (sans the error
>message). What a pleasure.
>
>I think the problem here, when writing programs that try to work with
>the rest of the Unix toolset, is the lack of a consistent, well-defined
>way of doing something as simple as getting the free space on a
>partition (not all Unix systems have statfs, I believe). How sad. The
>kernel reserves like 5% of the disk for itself too; sure would have been
>nice if it was smart enough to share or warn me. (I didn't get an error
>message for like a half hour; it was GNU mv that finally said hey bozo -
>you have no space left.)

Welcome to the world of "Worse Is Better". It's more palatable for an
operation to fail silently than to raise an unmistakable error signal that
must be handled. It's better to allow the user to let an error code go
unchecked rather than to force him to pass a routine that would handle an
error. It's better to rely on a fallible programmer's judgement than to use
facilities to force him to at least acknowledge the possibility of the error
(if only by an explicit statement to ignore the error). The scary thing is
that even Visual Basic on Windows has this type of thing more correct than C
libraries on UNIX systems.

The only thing necessary for bad systems to thrive is that you keep buying
into their badness.
--
Frank A. Adrian
First DataBank
frank_...@firstdatabank.com (W)
fra...@europa.com (H)
This message does not necessarily reflect the views of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.


wan...@exploited.barmy.army

Mar 13, 1998

In article <hj0O.22$WL.6...@typhoon.texas.net>,

Dan Higdon <hd...@charybdis.com> wrote:
>wan...@exploited.barmy.army wrote in message
><6e9h3f$4t9$1...@Masala.CC.UH.EDU>...
>>I like quite a few languages, pretty much functional languages
>>although this little chat has convinced me to give Forth another
>>chance, if for nothing else than fun (I've already downloaded like
>>a billion interpreters :)).
>
><GRIN!> Like I said, I think the world may have passed Forth by.
>Just my opinion, of course - chances are it's just me that's passed
>Forth by. I've become accustomed to named variables, parameter
>lists and strong type-checking. :-)

I don't really know what the status of Forth is, but then
popularity was never a factor in my decision to program :).


>
>>I like Scheme's elegance and consistency but as you said
>>Common Lisp is the most capable of "real world" development.
>>Some people like to brag about Scheme's small size, what they
>>neglect to mention however is that the size is due to a
>>serious lack of functionality.
>
>That's one reason I don't mess around with Scheme as much anymore.
>I still read the newsgroup though, because I love Scheme's concept and
>the sorts of discussions that arise from contemplating Scheme.


Same here. I'm sitting back, hoping that I'll be here when
(or if) the day arrives that the next standard (R6RS?) adds
just enough functionality to make migration back to Scheme
worthwhile.

I can still hope can't I?

[Snip]

>
>Yep. I started out with some nameless Lisp interpreter for my
>Apple ][ in the mid-80s, and then "discovered" Common Lisp
>at UTexas. (I also did my share of Icon, Prolog, and Smalltalk,
>but they were all to "special purpose" to be generally useful, IMHO.)

I never used Icon (I read about it), but I did use Prolog and
Smalltalk. Prolog is much more flexible than it seems, though it
may still be too narrow in scope; Smalltalk, on the other hand, struck
me as sufficiently general purpose.

The thing that got to me about Smalltalk (both a blessing
and a curse), is that the environment was considered part of
the language (I'm talking about GUIs, browsers, etc....).
This was nice in that Smalltalk probably has the nicest
development environment around, but bad in that systems
that didn't have GUIs were more or less SOL. I mean I had a
CLI version of Smalltalk, and a Smalltalk book and
I was trying to define a class, and the book had instructions
like "click on so and so". Kind of tough when you're
sitting in DOS :)


>
>I gotta admit, though, that I like the Haskell 'where' syntax a lot. Much
>nicer than SML's let...in...end (which works almost identically
>to Lisp's (let...) form, for those who don't program in SML or Haskell.)

I admit, I like Haskell's "where" syntax more than (let...).
It's nice to be able to defer component definitions until
after the main logic.
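
Roughly, the contrast (a trivial made-up example; SCALE and OFFSET are
just invented names):

(defun transform (x)
  (let ((scale 2)              ; in Lisp the bindings come first...
        (offset 3))
    (+ offset (* scale x))))   ; ...and the main logic comes last

In Haskell you'd write the main expression first and push the
definitions of scale and offset into a "where" clause below it.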

[Snip -- Promises vs. Lazy Evaluation]

Thanks for the info.


>
>You and me both. I catch a decent level of friendly hostility from
>my coworkers for being a "language hound". They don't understand
>what a conceptual advantage it is to understand all the
>paradigms embodied by different language designs (functional,
>lazy, logic/Horn clause, etc.)

It's a tremendous advantage. You learn new and exciting
ways of doing things, you understand new languages which may
be the norm a decade from now, and you may find a gem that
could make life a bit easier for you.

Where I work, I don't get any such luxury. I'm basically
stuck trying to debug/enhance/modify horribly written
C, C++, and Fortran code by applying equally gruesome hacks.
It's one ugly hack after another. Forget prototyping,
it's usually a brainless hack job, obfuscated by
idiotic dependencies that shouldn't be there!


>
>>Lisp can do numeric and data processing perfectly well.
>>In particular using mappings, filters, and reduction functions (they
>>are generalized to work over arbitrary sequences) can make such tasks
>far better suited for Lisp than most other languages.
>
>C's Fortran-like expression syntax is convenient. Of course, one
>of the first macros I wrote was an infix macro that worked something
>like: (infix 3 + 2 * x) => (+ 3 (* 2 x)). Try THAT in C. :-)

Right, it is pretty convenient, especially for complex
formulas, but after a while you get used to the
Lisp way :)
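
For anyone curious, here's a minimal sketch of such a macro, handling
only + and * with the usual precedence, and assuming single-token
operands with no parenthesized subexpressions:

(defun parse-sum (tokens)
  ;; split on the first +, recursing into both sides
  (let ((pos (position '+ tokens)))
    (if pos
        `(+ ,(parse-product (subseq tokens 0 pos))
            ,(parse-sum (subseq tokens (1+ pos))))
        (parse-product tokens))))

(defun parse-product (tokens)
  ;; split on the first *, which binds tighter than +
  (let ((pos (position '* tokens)))
    (if pos
        `(* ,(parse-product (subseq tokens 0 pos))
            ,(parse-product (subseq tokens (1+ pos))))
        (first tokens))))

(defmacro infix (&rest tokens)
  (parse-sum tokens))

;; (macroexpand-1 '(infix 3 + 2 * x))  =>  (+ 3 (* 2 X))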


>
>>Well the pointer operations -- are they for accessing devices
>>as memory (like screen buffers) or data (array accesses)?
>>I can't see a problem with Lisp using its own data structures to
>>handle this, rather than playing with pointers.
>
>No, just memory/array access. And there's nothing magic about
>GIF decoders, I was just saying that C's syntax is more streamlined
>for memory access and linear processing than Lisp. You may
>only agree that it's more terse and confusing, however. :-)

Well which is more streamlined would depend again on whether or not
the algorithm is conducive to maps, reductions, and filters.
If so, then the higher order sequence operations will make
Lisp much more streamlined than C's syntax (since we wouldn't
have to get into array access much, or even at all).


>
>>Pretty much :). The fact that we have introspective
>>capabilities, closures, a very consistent function-driven
>>syntax, and a powerful macro system means that extending
>>the language is very simple.
>
>Lisp's greatest strength, if you ask me. Otherwise, I see Lisp
>as just another non-pure functional language without static
>type checking. :-)
>(let's see how many black eyes I get for THAT comment!)


Hahahahahah. Well it's Common Lisp's distinguishing characteristic
to have introspection and a powerful macro system, take that away
and you've taken away quite a bit :). I wouldn't say that what's
left is just another non-pure functional language -- you've still got
quite a bit of horsepower under that hood :).

>
>>While I'll be the first to state that Java is overrated,
>>I will also be the first to state that it is better than
>>C/C++. It does have some decent features (like all
>>objects being references), but also some idiotic
>>shortcomings (primitive data types are not objects).
>
>Yes, it is a better language. Once the implementations
>catch up, I hope it gets used more.

Oh yes, the dreaded implementations :). The last time I
used Java, I was running JDK 1.0.1 (or something like that).
Within a week I had deleted it from my hard drive, and swore
that I wouldn't touch Java again until someone wrote an
implementation that actually worked for a change. Since then,
I've found vastly superior languages, and I doubt I'll be so much
as thinking about Java now :).


>
>>Overall, I think that even if it is adapted, it will be a
>>step in the right direction (at least it has garbage
>>collection).
>
>Maybe Java will finally be the language that teaches the
>world that automatic memory management is a GOOD thing.
>Best of all, Apple can't kill it like they more-or-less did Dylan. :-/

Dylan was what, basically a sugared-syntax version of Lisp? I always
wondered what the point of having a Lisp with a "traditional" syntax
would be, when much of the power comes from the non-traditional
syntax. I mean try treating code as data and vice versa with
a non-lisp-like syntax.


>
>Then, they'll come to see the beauty of the classical GC'ed
>language - Lisp! (Ok, that was really cheezy, but the sentiment
>holds true.)


Man, I'd platform dive in a pool of melted Limburger just to see
Lisp get widely used.


>
>----------------------------------------
>hd...@charybdis.com
>There's no one left to finger
>No one here to blame
>
>
>


---
Regards,
Ahmed

My email address has been altered to avoid spam. To email me,

send email to punkrock at cs dot uh dot edu.


Martin Rodgers

Mar 14, 1998

Steve Gonedes wheezed these wise words:

> When this happened to me under Unix, the program kept running and
> happily began to truncate all the files I tried to save (sans the error
> message). What a pleasure.

Yeah, instead of "crash" we should really say, "unspecified behavior".
That's even more scary. I'd like to not only minimize the amount of
damage software can do, but have some idea what the limit on that
amount will be. The painful reality is that there is no limit.



> I think the problem here, when writing programs that try to work with
> the rest of the Unix toolset, is the lack of a consistent, well-defined
> way of doing something as simple as getting the free space on a
> partition (not all Unix systems have statfs, I believe). How sad. The
> kernel reserves like 5% of the disk for itself too; sure would have been
> nice if it was smart enough to share or warn me. (I didn't get an error
> message for like a half hour; it was GNU mv that finally said hey bozo -
> you have no space left.)
>

Would it be better for the performance to gracefully degrade? I'd
still like some warning and means for software to catch it. Windows
has an event called something like WM_COMPACTING, but this is a very
crude way of warning an app that it needs to reduce memory resources.
With the demand-paged VM available today, it might not even be useful.

Unix has had true VM for a lot longer, but is there a signal for a
"low memory" event? If so, how useful is it? Is there a similar signal
for "low disk space"? How should a language handle such things? Even
when an event is outside the scope of the language, there should be a
way of setting an exception handler for it. For example, a conditon
class in a Common Lisp system. This will be system dependant, unless
the language already defines such a condition.
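
A sketch of what that might look like - LOW-DISK-SPACE and the
functions in the handler example are invented for illustration; only
DEFINE-CONDITION and HANDLER-CASE are standard:

(define-condition low-disk-space (serious-condition)
  ((partition :initarg :partition :reader low-disk-space-partition))
  (:report (lambda (c stream)
             (format stream "Low disk space on ~A"
                     (low-disk-space-partition c)))))

;; a system-dependent layer would signal it; application code could
;; then choose a response instead of silently truncating files:
(handler-case (save-all-buffers)
  (low-disk-space (c)
    (warn "~A - freeing temporary files" c)))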

Martin Rodgers

Mar 14, 1998

Frank A. Adrian wheezed these wise words:

> The scary thing is that even Visual Basic on Windows has this
> type of thing more correct than C libraries on UNIX systems.

Contrary to the propaganda spread by over-zealous C++ programmers, VB
does a lot of things better. Perhaps it even gets some of them "right",
with the result that programming in VB can be fun. (This is a thread
about how programming can be fun?) Unfortunately, no tool can ever
ensure that it will be used correctly. Responsible use is not a
feature that you can code into software. It's a wetware issue.

Programming tools that are easy to use help experienced programmers,
but they also help the inexperienced. The problem is that not everyone
can appreciate the difference. Hence "Worse Is Better".

This is why [boring rant mode] I feel we need better education.


--
Please note: my email address is munged; You can never browse enough

Ken Deboy

Mar 14, 1998
to Martin Rodgers

Martin Rodgers wrote:

-- snip --

> I'd say that K&R's C tutorial does the same thing, which is why some C

> programmers claim that book is "too hard". No it isn't! If you can't


> understand that book, then you should touch C! (However, see below.)

> That book assumes you already know how to program. _All_ the other C
> books I've seen don't teach you anything about programming, never mind
> how to program in C.

As someone just learning to program (in C), I'm not sure what you're
trying to say here. It seems like one already needs to know how to program
to understand the book (?) but if I can't learn programming from a book
or a class then I'm not supposed to program (or try to)? I haven't seen
the K&R book, but the books I've found most useful to teach me
programming are Thinking Forth (but I don't use Forth yet) and "Teach
Yourself Advanced C." I agree that most beginning C books teach squat
about actual programming. As for books, is Winston and Horn Lisp a good
one to learn programming in Lisp?

> A lot of people will suggest other alternatives to C. Today, the media
> and certain vendors are going mad about the current favourite
> alternative, Java.

I looked at learning Java, but I didn't like that everything in it has
to be "object oriented." Besides, if it was so good there wouldn't be so
much hype about it (look at Win95;) ). I doubt I'll ever use it except
to add a GUI to programs I write in other languages. Or maybe I'll use
Tk instead, since the Forth I want to try has a Tk interface but afaik
no Java int. Well, since I drifted off topic anyway, maybe someone
doesn't mind answering a simple question: what is a good book for
learning programming? I have Winston and Horn on order; is Little
Schemer better? Also, where is the FAQ so I can find Scheme for my
platform? Thanks for any help.

With best wishes,
Ken Deboy
glockr@locked_and_loaded.reno.nv.us
(please cc to my email because my newsreader is defective)

Christopher B. Browne

Mar 15, 1998

On Sat, 14 Mar 1998 21:18:25 -0800, Ken Deboy <glockr@locked_and_loaded.reno.nv.us> posted:

>Martin Rodgers wrote:
>
> -- snip --
>
>> I'd say that K&R's C tutorial does the same thing, which is why some C
>> programmers claim that book is "too hard". No it isn't! If you can't
>> understand that book, then you should touch C! (However, see below.)
>> That book assumes you already know how to program. _All_ the other C
>> books I've seen don't teach you anything about programming, never mind
>> how to program in C.
>
>As someone just learning to program (in C), I'm not sure what you're
>trying to say here. It seems like one already needs to know how to program
>to understand the book (?) but if I can't learn programming from a book
>or a class then I'm not supposed to program (or try to)?

In effect, K&R assumes that you have a programming background, and a general
understanding of "how computers work." In that context, it's a great
description of C.

What I think is being implied here is that:
a) C is not a good language to start with, and
b) *None* of the books on C do a good job of teaching you *how to program.*

They may be overstating the case somewhat; they've nonetheless got a point.
I haven't seen any really good books about C that don't require that you
already have a programming background.

>I haven't seen
>the K&R book, but the books I've found most useful to teach me
>programming are Thinking Forth (but I don't use Forth yet) and "Teach
>Yourself Advanced C." I agree that most beginning C books teach squat
>about actual programming. As for books, is Winston and Horn Lisp a good
>one to learn programming in Lisp?

SICP (Structure and Interpretation of Computer Programs) seems to get the
nod as being a great book about teaching about programming. It comes from a
Scheme perspective. It gets pretty mathematical in places; those not so
inclined will not appreciate this.

Thinking FORTH is indeed a good book; it particularly tries to grapple with
how one should factor code into pieces, and explains the importance of that
very well.

Winston & Horn is a pretty good book about learning LISP; it is somewhat
oriented towards Artificial Intelligence applications whereas SICP really
looks at the issue of understanding algorithms.

>> A lot of people will suggest other alternatives to C. Today, the media
>> and certain vendors are going mad about the current favourite
>> alternative, Java.
>
>I looked at learning Java, but I didn't like that everything in it has
>to be "object oriented." Besides, if it was so good there wouldn't be so
>much hype about it (look at Win95;) ). I doubt I'll ever use it except
>to add a GUI to programs I write in other languages. Or maybe I'll use
>Tk instead, since the Forth I want to try has a Tk interface but afaik
>no Java int. Well, since I drifted off topic anyway, maybe someone
>doesn't mind answering a simple question: what is a good book for
>learning programming? I have Winston and Horn on order; is Little
>Schemer better? Also, where is the FAQ so I can find Scheme for my
>platform? Thanks for any help.

What was your platform? There are Scheme implementations for virtually any
platform that supports Tk...

I've got a number of links to Scheme implementations and various information
resources at <http://www.hex.net/~cbbrowne/languages.html>

--
Those who do not understand Unix are condemned to reinvent it, poorly.
-- Henry Spencer <http://www.hex.net/~cbbrowne/lsf.html>
cbbr...@hex.net - "What have you contributed to Linux today?..."

Pierre Mai

Mar 15, 1998

>>>>> "MR" == Martin Rodgers <m...@this.email.address.intentionally.left.crap.wildcard.demon.co.uk> writes:

MR> Frank A. Adrian wheezed these wise words:


>> The scary thing is that even Visual Basic on Windows has this
>> type of thing more correct than C libraries on UNIX systems.

MR> Contrary to the propaganda spread by over-zealous C++
MR> programmers, VB does a lot of things better. Perhaps it even gets
MR> some of them "right", with the result that programming in VB
MR> can be fun. (This is a thread about how programming can be
MR> fun?) Unfortunately, no tool can ever ensure that it will be
MR> used correctly. Responsible use is not a feature that you can
MR> code into software. It's a wetware issue.

<RANT mode"silly">
Well, I've *seriously* programmed in well over 15 different languages,
and diddled with many, many more, but VB (and its many offspring) has
been the worst experience _ever_. Even handwriting sendmail.cf rules
seems more enjoyable in retrospect, and certainly using Applesoft-BASIC
(remember -- when Microsoft was much much smaller than today ;) was
much more fun than this.

Amongst the many misgivings I have, the paramount one is probably that VB
is totally unspecified. Often you end up writing small scripts
just to try to figure out what the real return type of some function
is (which is _defined_ nowhere in the manual, i.e. it just says
"returns a number", and you are left guessing what type that should
be), etc.

And no one can tell me that this makes it easier for newbies to work.
It just leads to fragile, brittle code, to say the least.

If you must draw pretty GUIs under Windows, Delphi is a much more
"mature" language/environment (which in comparison to real tools makes
it still brittle and sometimes painful, but at least it can be counted
as a programming language).
</RANT>

But then again, I'm probably a funny kind of person anyway, being
neither a member of the "C++ is the greatest/best/fastest" nor the
"C++ is rubbish/dangerous/evil" camps, nor hailing/condemning Java,
often using Common Lisp or Scheme for prototyping or "even"
implementation, but not shunning C++ if that seems appropriate (where
appropriate includes many non-technical factors as well), using
scripting languages without overhyping them (cf. the Scripting Paper
by Ousterhout), etc.

IMHO there is programming/software engineering and there is
languages ;-)

To me this is like respect and clothes: If you have gained the first,
the second doesn't really matter, but if you haven't, no clothes will
change that. But this doesn't mean that a good, classic suit might
not be more enjoyable to wear than a tight S&M outfit ;)

MR> Programming tools that are easy to use help experienced
MR> programmers, but they also help the inexperienced. The problem
MR> is that not everyone can appreciate the difference. Hence
MR> "Worse Is Better".

Well, easy and underdefined or similar fuzziness (simple glossing
over of important differences/details) are IMHO different things. In
my experience (which includes some teaching), when trying to teach
"newbies", always bear in mind what Einstein said:

Make it as simple as possible, but no simpler.

I have often seen teachers trying to gloss over facets which they
thought were too complex for the pupils, with the result that the pupils
were then really lost. You IMHO don't make this easier by leaving out
important detail, but by structuring your presentation of them well.

BTW: When you refer to "Worse is Better", this refers back to an essay
by Richard P. Gabriel (=> Lisp, Common Lisp, Lucid, ...). And IMHO
Gabriel interprets this state of affairs quite a bit differently than
you do (i.e. less gloomy/biased).

Just my 2 centi-euros...

Regs, Pierre.

--
Pierre Mai <de...@cs.tu-berlin.de> http://home.pages.de/~trillian/
"Such is life." -- Fiona in "Four Weddings and a Funeral" (UK/1994)

Martin Rodgers

Mar 16, 1998

Pierre Mai wheezed these wise words:

> <RANT mode"silly">
> Well, I've *seriously* programmed in well over 15 different languages,
> and diddled with many, many more, but VB (and it's many offspring) has
> been the worst experience _ever_. Even handwriting sendmail.cf rules
> seems more enjoyable in retrospect, and certainly using Applesoft-BASIC
> (remember -- when Microsoft was much much smaller than today ;) was
> much more fun, than this.

Are you talking only about the language? Last time I checked, VB was a
development system. Mind you, if I had to write 400 forms, I'd
definitely consider using something that could help me write code that
will write the forms and their code. I wouldn't choose VB.



> Amongst the many misgivings I have, the paramount one is probably that VB
> is totally unspecified. Often you end up writing small scripts
> just to try to figure out what the real return type of some function
> is (which is _defined_ nowhere in the manual, i.e. it just says
> "returns a number", and you are left guessing what type that should
> be), etc.

This is a language issue, I think. I've never seen any document for
ANSI Basic, so I can't even comment on _that_, never mind VB, VB's
conformance to any standard (whoever defines it), etc. If MS don't
provide a spec for the VB syntax and semantics, they'd only be
following a long tradition. The fact that it sucks farts from dead
cats won't stop people from continuing this foul habit.



> And no one can tell me that this makes it easier for newbies to work.
> It just leads to fragile, brittle code, to say the least.

Did I mention newbies? [checks thread subject] Even some experienced
programmers seem to like VB. I'm not one of them. I can only note that
these people tend not to have discovered Lisp.



> If you must draw pretty GUIs under Windows, Delphi is a much more
> "mature" language/environment (which in comparison to real tools makes
> it still brittle and sometimes painful, but at least it can be counted
> as a programming language).
> </RANT>

Agreed. I'd love a Scheme or CL system that had an IDE like Delphi's.
I'd even like a C++ system like that. Not much, but for those times
when C++ is a necessary evil, that's how I'd like to use it.



> But then again, I'm probably a funny kind of person anyway, being
> neither a member of the "C++ is the greatest/best/fastest" nor the
> "C++ is rubbish/dangerous/evil" camps, nor hailing/condemning Java,
> often using Common Lisp or Scheme for prototyping or "even"
> implementation, but not shunning C++ if that seems appropriate (where
> appropriate includes many non-technical factors as well), using
> scripting languages without overhyping them (cf. the Scripting Paper
> by Ousterhout), etc.

Hey, we're both heretics. ;) I've been flamed for expressing such
opinions. Frequently. I think the last time I got flamed in
comp.lang.lisp was for referring to "religious fanatics" - I was
thinking of C++ programmers, so go figure.



> IMHO there is programming/software engineering and there is
> languages ;-)

Most certainly. I learn new tricks in one language and apply them in
anything else I use. One of the tricks that MS get right is to make
heavily used activities simple to do, like creating forms. Perhaps
this isn't a big deal for everyone, but I know that there _are_ people
who care a great deal about such things - like users.

Anything that makes it easier for a programmer to deliver what a user
demands is a good thing. That's what some of us get paid for. ;)



> To me this is like respect and clothes: If you have gained the first,
> the second doesn't really matter, but if you haven't, no clothes will
> change that. But this doesn't mean that a good, classic suit might
> not be more enjoyable to wear than a tight S&M outfit ;)

Exactly. This is why I like to point out that VB isn't just for
newbies, nor is VB just Basic. There's also "Ruby". This is the bit
that Borland reverse engineered for Delphi.



> Well, easy and underdefined or similar fuzziness (simple glossing
> over of important differences/details) are IMHO different things. In
> my experience (which includes some teaching), when trying to teach
> "newbies", always bear in mind what Einstein said:
>
> Make it as simple as possible, but no simpler.

This is why I like "The Little Schemer" book so much. Even people who
don't use Scheme (or any other Lisp) will recommend it.



> I have often seen teachers trying to gloss over facets which they
> thought were too complex for the pupils, with the result that the pupils
> were then really lost. You IMHO don't make this easier by leaving out
> important detail, but by structuring your presentation of them well.

Why am I thinking of "The Peaceman"? ;) His failing appeared to be his
(lack of) understanding of recursion. I asked him repeatedly if this
could've been due to poor teaching, but he insisted on attacking
recursion.



> BTW: When you refer to "Worse is Better", this refers back to an essay
> by Richard P. Gabriel (=> Lisp, Common Lisp, Lucid, ...). And IMHO
> Gabriel interprets this state of affairs quite a bit differently than
> you do (i.e. less gloomy/biased).

My understanding of the paper is that he was being critical of Lisp in
some places, and praising it in others. Nothing is so perfect that it
can't be improved. However, there may be some debate over the reasons
for the "Worse is Better" situation. Is it due to stupidity or just
ignorance? I hope the latter is the reason, so we can do something to
improve matters. Ignorance implies a failing of education.

It's not all bad. There are many smart programmers who've simply not
yet discovered Lisp. All that they've heard so far has been bad; the
myths and propaganda. If all they know to be available is C++ and VB,
because that's all they read about in the professional development
magazines, then we can't blame them for thinking this is all there is.

No doubt some vendors would like us to think that the choices are that
simple: one of their development tools or the other. Some of us know
of the alternatives, but that alone is not enough to change the world.

As I've often said before, I see this as a memetic battle. One meme
says that "C++ is fun." Another says, "VB is more fun than C++". We
also know a meme that says, "Lisp is more fun than anything else."
This meme is distinctly different from the apparently better known
memes, and this distinction is explained and (significantly) supported
by the additional meme, "There are no limits to how much fun Lisp can
be." This is an idea that programmers unfamiliar with Lisp will find
very hard to believe. They may mistake it for marketing/advocacy BS.

Is it surprising that there are programmers so cynical that they
disbelieve the claims we make about Lisp? Faith is not enough. If you
make bold claims, then others will justifiably demand hard evidence.
Is there a lack of such evidence? Does it require some level of faith
before people will even _look_ at the evidence?

I know I'm not the first to ask these questions. I know that Lisp is
fun, more fun than anything else I've used. Why do so few people
believe me? Why the blank looks? Do we blame marketing/advocacy BS?
Clannish behaviour and too much testosterone? MS for not appreciating
the value of Lisp - or Gates' clannish preference for Basic? I don't
believe that Gates is stupid, but somehow the "Basic is fun" meme
seems to beat even the "C++ is fun" meme.

How healthy is the "Lisp is fun" meme? I don't mean in schools or
online. The meme may be very strong in schools, but what happens when
it enters the commercial world? Perhaps it really is as simple as form
creation, and VB does this better than almost anything else. Never
mind how awful Basic is for writing complex software, coz that's not
how it gets used. And there's the software component issue. If you can
drag 'n' drop a VBX/OCX spreadsheet tool into your form, you'll make a
lot of business users and developers happy. This is also "fun".

So called "AI" techniques are now finding their way into business
apps, but many are still hostile to this AI meme. I like writing code
that writes code, which could be called compiler writing. If I call it
that, people think I must be doing something incredibly difficult. Yet
I've seen a VB programmer doing pretty much the same kind of thing. He
just didn't call it a compiler. I might do it in Lisp, and do it
better, but the principles are the same. I've just had more practice.

I could ramble like this all day. ;)


--
Please note: my email address is munged; You can never browse enough

Frank A. Adrian

Mar 16, 1998

>Dylan was what, basically a sugared-syntax version of Lisp? I always
>wondered what the point of having a Lisp with a "traditional" syntax
>would be, when much of the power comes from the non-traditional
>syntax. I mean try treating code as data and vice versa with
>a non-lisp-like syntax.

Actually, the first implementation of Dylan had Lisp syntax (see Apple's
first DRM)! Then the brains behind it felt that by giving it a more C-like
syntax, it would gain acceptance more readily. Basically Dylan was (is? I
thought that Harlequin was still working on an implementation) a Lisp that
is OO from the ground up, using CLOS-like generic operations on all
primitives as well as "objects". It also had some enhancements for
improving performance. Also, no MOP (though I may be mistaken about this).
So, it was a conceptual child of Oaklisp, with an object system like CLOS,
and single lexically bound name space like Scheme. It probably would have
been a worthy Lisp branch had they not messed up the surface syntax with all
of that horrid C stuff.

Dan Higdon

Mar 16, 1998

wan...@exploited.barmy.army wrote in message
<6ebknr$cep$1...@Masala.CC.UH.EDU>...

>Same here. I'm sitting back, hoping that I'll be here when
>(or if) the day arrives that the next standard (R6RS?) adds
>just enough functionality to make migration back to Scheme
>worthwhile.
>
>I can still hope can't I?

Maybe! I get the distinct impression that the Scheme community has
turned away from making Scheme a "real world" development
system, and is using it mostly as a testbed for new and powerful
language ideas, like the macro system. (Which is VERY cool,
BTW.)

>I never used Icon (I read about it), but I did use Prolog and
>Smalltalk. Prolog is much more flexible than it seems, though it
>may still be too narrow in scope; Smalltalk, on the other hand, struck
>me as sufficiently general purpose.

I liked Icon a lot, because it gave me Prolog's backtracking with
a more "classic" syntax, and the ability to program procedurally
if I wanted to. (Gee, are we getting a theme here?) Prolog can
do some REALLY neat stuff, and I suspect the newer dialects would
really surprise me.

>The thing that got to me about Smalltalk (both a blessing
>and a curse), is that the environment was considered part of
>the language (I'm talking about GUIs, browsers, etc....).

Yep, nice environment, but difficult to deliver programs with, IMHO.

>I admit, I like Haskell's "where" syntax more than (let...).
>It's nice to be able to defer component definitions until
>after the main logic.

Very nice, indeed! I just looked at the Haskell page, and grabbed
a copy of the HUGS interpreter. VERY cool stuff; I had no idea how
far that language had come since the ol' Gofer days. Thanks for
cluing me in on that one. It even has what looks like practical and
usable (for my purposes - Win32 deliverable development)
implementations.

>Where I work, I don't get any such luxury. I'm basically
>stuck trying to debug/enhance/modify horribly written
>C, C++, and Fortran code by applying equally gruesome hacks.

Darn, sorry to hear that. I've had jobs like that before - I spent
many a year working on project management software written
in Fortran and xBASE. <shudder> That's about as far away from
programierenvergnuegen as you can get. :-) Actually, the guys I
was working with were cool enough to help ease the pain, but it's
not Lisp programming, that's for sure.

>>C's Fortran-like expression syntax is convenient. Of course, one
>of the first macros I wrote was an infix macro that worked something
>>like: (infix 3 + 2 * x) => (+ 3 (* 2 x)). Try THAT in C. :-)
>
>Right, it is pretty convenient, especially for complex
>formulas, but after a while you get used to the
>Lisp way :)

Yeah, in all honesty, I rarely used my infix macro. Of course, I use an
HP calculator, so non-algebraic representations don't scare me. :-)

>Well which is more streamlined would depend again on whether or not
>the algorithm is conducive to maps, reductions, and filters.
>If so, then the higher order sequence operations will make
>Lisp much more streamlined than C's syntax (since we wouldn't
>have to get into array access much, or even at all).

I'll have to think over that one - a linear mapping from encoded data
to unencoded data, where both datasets are 2 dimensional, but just
may be treatable as a 1 dimensional stream if you squint right. Hmmm.

>>Lisp's greatest strength, if you ask me. Otherwise, I see Lisp
>>as just another non-pure functional language without static
>>type checking. :-)
>>(let's see how many black eyes I get for THAT comment!)
>
>Hahahahahah. Well it's Common Lisp's distinguishing characteristic
>to have introspection and a powerful macro system, take that away
>and you've taken away quite a bit :). I wouldn't say that what's
>left is just another non-pure functional language -- you've still got
>quite a bit of horsepower under that hood :).

Oh sure, I don't disagree with that at all. Macros and introspection are
very nice. I think the lack of macros (and any sort of conditional
compilation) is a serious flaw in SML. Introspection in a strongly-typed
(dang, what's the correct term for languages whose variables have type
restrictions, as opposed to Lisp's variables, which can refer to any
type?) language is difficult at best, and really relies on the
development environment. Fortunately for me, my programming style rarely
needs to query types, so I don't miss it that much. It would be nice to
know such facilities were there should I need them, however.

>Oh yes, the dreaded implementations :). The last time I
>used Java, I was running JDK 1.0.1 (or something like that).
>Within a week I had deleted it from my hard drive, and swore
>that I wouldn't touch Java again until someone wrote an
>implementation that actually worked for a change. Since then,
>I've found vastly superior languages, and I doubt I'll be so much
>as thinking about Java now :).

Ditto. Although the Scheme->JavaVM compiler is an interesting concept.

>Dylan was what, basically a sugared-syntax version of Lisp? I always

Not quite, but almost. I can best explain it with an analogy:
"Dylan is to CLOS as Scheme is to Common Lisp (without CLOS)"
Dylan uses multimethods, and is a fully object oriented language. The
interesting thing about Dylan is that since every object is based on
<Object>, you can use as much or as little type checking as you want.
You can even fully type specify a method, and expect the compiler to
generate maximally efficient code (no need to see if that value is
a number before you add something to it - the compiler can already
guarantee that it's a number). I know analysis tools can do this under
traditional Lisps, but in Dylan the programmer has the option of manually
specifying as tight or as loose a typecheck as desired.
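
The Common Lisp analogue, for comparison, is a generic function with
fully specialized methods (a sketch; unlike Dylan, a standard CL
compiler isn't obliged to use the specializers for optimization):

(defgeneric add (x y))          ; dispatches on both arguments

(defmethod add ((x number) (y number))
  (+ x y))

(defmethod add ((x string) (y string))
  (concatenate 'string x y))

;; (add 1 2)      => 3
;; (add "ab" "c") => "abc"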

>wondered what the point of having a Lisp with a "traditional" syntax
>would be, when much of the power comes from the non-traditional
>syntax. I mean try treating code as data and vice versa with
>a non-lisp-like syntax.

I don't think Dylan can do the code/data thing with the infix syntax,
although I think Dylan retains the old-fashioned Lisp syntax for just such
an occasion.

Dan Higdon

Mar 16, 1998

Michael Hobbs wrote in message <35095C7B...@ccmail.fingerhut.com>...

>Yes, but how much do you get paid because you are able to understand the
>cruddy, cryptic language? :-) I sometimes enjoy thinking about the
>analogy of computer programmers as priests (in the ancient sense). We
>are highly respected (and well paid) because we are able to understand
>the mystifying oracles and omens that the average person is unable to
>comprehend.

A point well taken. I get the satisfaction of a decent payrate, steady
work, and my product on the shelf at the local CompUSA at the end of the
day. That's worth something, and any programming is more fun than not
programming. Of course, my life would be a little nicer if I didn't
always have to wonder when the undetected memory leak/trasher will rear
its ugly head. That's not as much fun.

Still, knowing C/C++ has helped to feed me and fund my various hobbies,
so I shouldn't bite the hand that feeds me, even if the hand is just as
likely to strike me when my back is turned. :-)

Tony Finch

Mar 16, 1998

"Dan Higdon" <hd...@charybdis.com> wrote:
>
>Oh sure, I don't disagree with that at all. Macros and introspection
>are very nice. I think the lack of macros (and any sort of
>conditional compilation) is a serious flaw in SML. Introspection in a
>strongly-typed (dang, what's the correct term for languages whose
>variables have type restrictions, as opposed to Lisp's variables,
>which can refer to any type?) language is difficult at best, and
>really relies on the development environment. Fortunately for me, my
>programming style rarely needs to query types, so I don't miss it
>that much. It would be nice to know such facilities were there
>should I need them, however.

The CAML people from INRIA in France have done some fairly cool work
on macro processing for their version of ML. Given that ML's syntax is
somewhat more complicated than lisp's, this is quite an achievement.

Tony.

Mike Williams

Mar 16, 1998

In article <HrpN.415$SA3.3...@typhoon.texas.net>, "Dan Higdon" <hd...@charybdis.com> writes:

|> Amen. I eagerly await the day that C/C++ can be dethroned. Already,
|> people in the industry are starting to come around to realizing how
|> crappy the C/C++ family of languages is for large-scale development
|> of sophisticated apps. Now, if our "fun" languages could only replace
|> C in the real world, programming for a living could become fun again.

I posted the following article a few weeks back in
comp.lang.functional. Maybe it is of interest to Lisp and Scheme
enthusiasts as well. For those who don't know, Erlang is a
dynamically typed functional language with additions for concurrent
and distributed programming.

/Mike

PS. The figures below are wrong, should be:

- 375 000 lines of Erlang
- 250 000 lines of C
- 3 600 lines of Java


-------------forwarded article--------------------


I enclose, at the end of this article, a press release from
Ericsson about a new high capacity ATM Switch. This system is
controlled by a large number of processors. The central processors are
Sparc processors running Solaris 2. The main part of the software is
written using Erlang. In terms of code volume, the system contains
290 000 lines of Erlang code, 250 000 lines of C, and 2500 lines of Java
(all not counting comments). The Erlang code would probably have been
well over one million lines of C code.

The most widely sold Functional Programming based system is the
Mobility Server. This system contains 250 000 lines of Erlang
code. The Mobility Server is an adjunct to Ericsson's MD110 PABX
(private telephone exchange). So far 270 Mobility Servers have been
sold in countries throughout the world. They are selling at a rate of about
30 systems a month. The Mobility Server uses Erlang running on Force
processors with the VxWorks operating system.

It has been clearly recognised that using Erlang leads to a very fast
time to market and vastly reduces the development effort. Experience
from the Mobility Server shows that the software is of very high
quality; very few bugs have been reported from the field.

/Mike Williams

PS. There was a large discussion in these newsgroups a few weeks back
about whether functional programming reduced complexity and was easier or
not. Would any of the people who didn't believe in functional
programming like to comment in the light of the above information?

---PRESS RELEASE---

ERICSSON ANNOUNCES HIGH PERFORMANCE ATM SWITCHING SYSTEM

Ericsson have today announced the AXD 301 ATM switch, a
high-performance, scaleable ATM switching system for both backbone
networks and edge applications. Its carrier-class design provides
the capacity, scalability, availability, and end-to-end
manageability needed to efficiently handle real-time and business
critical traffic in networks. A unique load sharing switching
concept makes the AXD 301 very compact and cost efficient. Near
linear scalability from 10Gbit/s to 160Gbit/s and a compact size
also makes it cost effective in small configurations down to about
5Gbit/s, making it suitable both for the edge and the core of a
network.

The AXD 301 is a key building block in multi-service ATM networks.
It can handle all currently envisaged broad band services, including
IP routing, high-speed data communications and other business
communications services, and residential services such as high speed
Internet access and interactive TV. Applications include ATM
connectivity networks; scaleable frame relay/ATM networks; and
Multi-Protocol Label Switching (MPLS) for efficient handling of IP
traffic. AXD 301 can be used in business access and residential
broad band access networks and can be combined with Ericsson's AXE
switching system to provide a full range of narrow band services.
All of these applications can run simultaneously on the same switch.
The AXD 301 is intended for public network operators and Internet
service providers, as the foundation for the long-term evolution of
Internet, data and telecommunications services.

The AXD 301 is equipped with functions such as large buffers,
per-connection queueing, multiple service classes, and packet discard,
which allow mixing different traffic types while still preserving
quality and efficiently using bandwidth. The system has a
performance, scalability and reliability that makes it suitable for
large networks. Its carrier-class software structure makes it
possible to include new functionality without disturbing traffic.

Additionally, the AXD 301 supports both cross connect and switching
applications. Complete support of ATM signaling protocols, and full
inter-networking between all protocols allows an operator to build a
network that flexibly combines different signaling protocols.
Plug-and-play network domains, separated by inter-carrier networking
protocols can easily be created. Also, as a complement to permanent
connections, soft permanent connections are supported, which
automates the routing of management controlled connections and
therefore reduces the network administration workload.

"In developing the AXD 301 we have used Ericsson's core expertise in
the area of switched technology, network building and real-time
services. We know that deployment of ATM networks is of increasing
strategic importance for the many operators who are now looking for
more advanced switches that can handle all types of traffic in
various combinations of networks. The AXD 301 significantly
strengthens Ericsson's strategic position as a supplier in the
datacom industry," said Anders Igel, Executive Vice President of
Ericsson and head of Infocom Systems.

Mike Mcdonald

Mar 16, 1998

In article <6ebknr$cep$1...@masala.cc.uh.edu>,
wan...@exploited.barmy.army writes:

> Where I work, I don't get any such luxury. I'm basically
> stuck trying to debug/enhance/modify horribly written
> C, C++, and Fortran code by applying equally gruesome hacks.
> It's one ugly hack after another. Forget prototyping,
> it's usually a brainless hack job, obfuscated by
> idiotic dependencies that shouldn't be there!

Boy! This sure sounds like a description of my jobs
over the last 5 years or so! What's really depressing
is that my current company USED to be full of
lisp/scheme hackers. Unfortunately, someone let the
marketing yahoos into the picture and now it's C++ on
NT. Needless to say, I don't find programming "fun"
anymore. It's dull, tedious, and monotonous.

Isn't there any company left out there that's
interested in doing things "right"?

Mike McDonald
mik...@mikemac.com


Wolfgang von Hansen

Mar 16, 1998

Moin, Moin!

Espen Vestre <e...@nextel.no> writes about `Re: "Programming is FUN again"
rambling commentary':

> b...@wetware.com (Bill Coderre) writes:
>
> > Perhaps if C programs did all of the error-checking and memory-management
> > they were supposed to, they would also be slower. (I'm sure there's cases
> > where Lisp is doing TOO MUCH error checking, and that's resulting in
> > unnecessary speed loss, but hey.)
>
> It's not generally the case that C programs are _fast_. Quite in
> opposite: The popular programs these days typically are _suprisingly_
> _fat_ and _slow_. Some reasons for this might be:
>
> 1) a lot of code is programmed by really bad C newbies
> 2) too much has to be done in too short time
> 3) too many software companies hold hardware shares
>
> But, if you think of it, (1) is partly a consequence of (2).
> And if you think more of it, both (1) and (2) may partly be
> caused by the choice of language. In Common Lisp fewer people
> could do more in shorter time!

I disagree with that. (1) and (2) are not caused by the choice of language
but by the choice of algorithms. You don't expect a newbie to produce fast
code in any given language. Just think of the seven versions of COPY-LIST
in the FAQ. IMHO (3) only holds true for Micro$soft as they seem to be in
liaison with Intel. That's not all bad because we have seen a massive
increase in processing speed during the past years. OTOH we are forced to
run Windoze on such systems most of the time. :-( Anyway, I believe Windoze
would be a neat OS if they would develop it on a 386sx-16 rather than on
state-of-the-art computers. I add (4) and (5) to your list:

4) software development on computers faster than those the program is going
to be executed on in real life
5) compilers that heavily rely on libraries instead of creating the code
themselves

As for (5), I just wrote two "Hello World!" programs -- one in C and one in
C++ -- and compiled both with the GNU C compiler. Their size was the same
(17k), but that's with dynamic linking at run time. The C version took 0.3s
to execute (my computer isn't too fast anyway). The C++ version ran for a
full 1.6s! I'm not sure I really want to know what the computer did during
the excess 1.3s. Supposedly a lot more libraries had to be opened. (I even
had to add termcap manually because curses (which had been added for some
strange reason) needed the function tgetstr; don't ask me why an output-only
program reads characters from the keyboard.)

To get back to Lisp, I believe speed solely depends on the algorithms used
and the quality of the implementation of the interpreter/compiler. Unlike
other languages, Lisp has its own view of memory, which is organized in
cells that build up binary trees but do not have any sequential order in
the first place (here, the word heap for free mem is even more appropriate).
An extra layer is needed to connect this external representation to the
underlying hardware structure, and if this layer is implemented really badly
the whole system performance goes down.

OTOH, all programs run on the same processor architecture, and each can be
defined as an automaton that accepts certain inputs and answers with certain
outputs. A Lisp compiler may produce the same code as a C compiler for
a given task. The only reason they differ is that the compiler doesn't
know what will happen during execution. E.g. in C, all memory is allocated
at compile time (excluding malloc() and the like, which simply are function
calls) and the compiler thus knows the location of every data item. A Lisp
compiler never knows how often CONS is invoked or whether a literal is going
to be bound to some memory location or not.

However, this difference also influences the usability of a language for
a given task. Programs that need a highly dynamic interior are better
implemented in Lisp rather than C. But if the task only needs constant space
and simple operations on it, Lisp might also be too sluggish because it
expects the program to do more than it actually does.

> There will always be a need for programming at the level of C
> or lower.

As long as operating systems are written in C, it's best to access system
resources through the native tongue.


Regards

Wolfgang
--
(_(__)_) Privat: w...@geodesy.inka.de //
~oo~ Uni: vha...@ipf.bau-verm.uni-karlsruhe.de \X/
(..)

Hartmann Schaffer

Mar 16, 1998

Wolfgang von Hansen wrote:
> ...

> state-of-the-art computers. I add (4) and (5) to your list:
>
> 4) software development on computers faster than those the program is going
> to be executed on in real life
> 5) compilers that heavily rely on libraries instead of creating the code
> themselves

> ...
Add
6) Too many developers believe that their code will only be run on
machines with infinitesimal cycle time and infinite memory, and this
assumption turns out to be wrong in real life
--

Hartmann Schaffer
Guelph, Ontario, Canada
scha...@netcom.ca (hs)

Espen Vestre

Mar 17, 1998

"Wolfgang von Hansen" <w...@geodesy.inka.de> writes:

[I wrote:]


> > It's not generally the case that C programs are _fast_. Quite in
> > opposite: The popular programs these days typically are _suprisingly_
> > _fat_ and _slow_. Some reasons for this might be:
> >
> > 1) a lot of code is programmed by really bad C newbies
> > 2) too much has to be done in too short time
> > 3) too many software companies hold hardware shares
> >
> > But, if you think of it, (1) is partly a consequence of (2).
> > And if you think more of it, both (1) and (2) may partly be
> > caused by the choice of language. In Common Lisp fewer people
> > could do more in shorter time!
>
> I disagree with that. (1) and (2) are not caused by the choice of language
> but by the choice of algorithms. You don't expect a newbie to produce fast
> code in any given language.

No, we agree. I think :-).

My point is that choice of language influences time to deliver
(acceptable) code. To me it's quite obvious - and I think it has
been scientifically documented - that in the general case (not such
special cases as highly optimized numeric code which is discussed in
another thread) development in CL is faster than in C/C++.

So, as the projects typically try to accomplish too much within
a too limited timeframe, a project using CL could fulfill the
goal with fewer people, _or_ with the same number of people
having more time to figure out better algorithms to use.

In such a setting, the efficiency of compilers is not a very big
issue, I think, since the project with its limits will end up
using very suboptimal algorithms anyway, one of the reasons being
(1) which again is caused by (2) because the project has to hire
a lot of very inexperienced people, using "raw manpower" to get
things done within the unrealistic time limits. With CL, you
could avoid hiring too many of the inexperienced, and you could
have more time finding the right algorithms.

--

regards,
Espen Vestre


f o x a t . n y u . e d u

Mar 17, 1998

Espen Vestre <e...@nextel.no> writes:

> My point is that choice of language influences time to deliver
> (acceptable) code. To me it's quite obvious - and I think it has
> been scientifically documented - that in the general case (not such
> special cases as highly optimized numeric code which is discussed in
> another thread) development in CL is faster than in C/C++.

If this has been scientifically documented, I'd love a reference to
add to my bibliography...
--
David Fox http://www.cat.nyu.edu/fox xoF divaD
NYU Media Research Lab f...@cat.nyu.edu baL hcraeseR aideM UYN

Jonathan Guthrie

unread,
Mar 21, 1998, 3:00:00 AM3/21/98
to

In comp.lang.scheme Frank A. Adrian <frank_...@firstdatabank.com> wrote:
> Welcome to the world of "Worse Is Better". It's more palatable for an
> operation to fail silently than to raise an unmistakable error signal that
> must be handled. It's better to allow the user to let an error code go
> unchecked rather than to force him to pass a routine that would handle an
> error. It's better to rely on a fallible programmer's judgement than to use
> facilities that force him to at least acknowledge the possibility of the
> error (if only with an explicit statement to ignore the error). The scary
> thing is that even Visual Basic on Windows has this type of thing more
> correct than C libraries on UNIX systems.

> The only thing necessary for bad systems to thrive is that you keep buying
> into their badness.

While this is true, it is not necessarily true that forcing the programmer
to deal with possible error results is the best approach. My reaction
to such a language might be to program emacs to automatically write the
code needed to ignore the errors. (Then again, it might be to scrap the
translator that required such a thing.)

You see, the most important thing that should happen when an error occurs
is that the program should continue to run. Report an error, maybe, give
bogus results, maybe, but if the program just exits with a cryptic error
message (or a core dump or, perhaps, a dialog box with a register dump)
then I cannot suggest to the user what might be wrong and I cannot easily
diagnose what happened.

That, and the fact that there often is no in-program way to deal with the
actual error (therefore many errors are best simply ignored) is the reason
why "never test for an error that you don't know how to deal with" is such
a common, and useful, attitude WRT error trapping. The expertise of the
programmer comes in at the point of deciding which errors can cause what
I used to call "erratic operation." For example, some errors must be
trapped or otherwise good data is lost. (A defensive posture WRT databases
is your best bet.)

In my opinion, the guys who did the C numeric extensions (I'm drawing a
blank on the actual name) got it right. When you get an error, propagate
that error, but keep the calculation going. When it's all over, you can
tell whether or not you got bogus results because of machine limits or
some such. (You can get bogus results at any time by writing the wrong
program, and no programming language is going to be able to help you
with that.) At that point, you can either simply display the results
(and be prepared to explain to people why their account balance shows as
"-#inf") or issue an error message saying that something bad happened.

I guess what I'm trying to say is that program exceptions aren't a panacea.
They are wonderful for handling those cases that must be handled (cleaning
up after an error in the middle of a nested conditional leaps to mind)
but they're not always needed. In my opinion, they are useful about as
often as they are inappropriate.

--
Jonathan Guthrie (jgut...@brokersys.com)
Information Broker Systems +281-895-8101 http://www.brokersys.com/
12703 Veterans Memorial #106, Houston, TX 77014, USA

We sell Internet access and commercial Web space. We also are general
network consultants in the greater Houston area.


R. Toy

unread,
Mar 21, 1998, 3:00:00 AM3/21/98
to

Jonathan Guthrie wrote:
>
> In my opinion, the guys who did the C numeric extensions (I'm drawing a
> blank on the actual name) got it right. When you get an error, propagate
> that error, but keep the calculation going. When it's all over, you can
> tell whether or not you got bogus results because of machine limits or
> some such. (You can get bogus results at any time by writing the wrong
> program, and no programming language is going to be able to help you
> with that.) At that point, you can either simply display the results
> (and be prepared to explain to people why their account balance shows as
> "-#inf") or issue an error message saying that something bad happened.

Yeah, I just really *LOVE* it when my simulation that's been running for
days finally finishes and says NaN for every answer. So, which one of
those billion or trillion flops caused the problem? Or worse yet,
something that isn't supposed to overflow does, corrupts other
computations, I take the reciprocal to get zero, and the results look ok
because they're just numbers, but they are totally bogus. (Ok, this
isn't too likely, but you never know.)

In this particular case, I want my computations to die when something
unexpected happens, like overflow, invalid ops, etc.
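
One way to get that behavior in C is sketched below with the C99 <fenv.h>
interface (an assumption: the implementation defines the FE_* flags used
here): clear the accrued-exception flags at the start, then poll them
periodically so the run dies near the offending step instead of days later.

    #include <fenv.h>
    #include <stdio.h>
    #include <stdlib.h>

    #pragma STDC FENV_ACCESS ON

    /* Sketch only; call this every few thousand iterations. */
    static void check_flops(long step)
    {
        if (fetestexcept(FE_OVERFLOW | FE_INVALID | FE_DIVBYZERO)) {
            fprintf(stderr, "FP exception near step %ld; aborting\n", step);
            exit(EXIT_FAILURE);
        }
    }

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        volatile double x = 1e308;
        x *= 10.0;          /* overflows */
        check_flops(1);     /* dies here, not days later */
        return 0;
    }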


--
---------------------------------------------------------------------------
----> Raymond Toy rt...@mindspring.com
http://www.mindspring.com/~rtoy

R. Toy

unread,
Mar 21, 1998, 3:00:00 AM3/21/98
to

Christopher B. Browne wrote:
>
> Which clearly establishes that there are multiple valid sorts of behaviour
> for this.
>
> In an "embedded" application (of whatever sort), it is utterly inappropriate
> for the program to crash. If there is no mechanism to deal with recovery,
> then it makes little sense to allow a crash. Hence, "propagate error, and
> keep going."
>

Yes, I agree with multiple valid sorts of behavior.

Let's see, I don't know what to do about the error, so I'll just
continue and pull out the control rods, turn off the coolant, disconnect
the operator panel and quietly keep going until meltdown. :-)

However, I don't see much point in propagating the error. It seems to
me that a reset and restart could hardly do worse. But every
application has to deal with this in some way, and the choices
that are made should be conscious decisions of the
designers/programmers, not some convenient default behavior that
everyone forgot about because it never happened during testing.

Ray

Christopher B. Browne

unread,
Mar 22, 1998, 3:00:00 AM3/22/98
to

On Sat, 21 Mar 1998 13:41:54 -0500, R. Toy <rt...@mindspring.com> posted:

>Jonathan Guthrie wrote:
>> In my opinion, the guys who did the C numeric extensions (I'm drawing a
>> blank on the actual name) got it right. When you get an error, propagate
>> that error, but keep the calculation going. When it's all over, you can
>> tell whether or not you got bogus results because of machine limits or
>> some such. (You can get bogus results at any time by writing the wrong
>> program, and no programming language is going to be able to help you
>> with that.) At that point, you can either simply display the results
>> (and be prepared to explain to people why their account balance shows as
>> "-#inf") or issue an error message saying that something bad happened.
>
>Yeah, I just really *LOVE* it when my simulation that's been running for
>days finally finishes and says NaN for every answer. So, which one of
>those billion or trillion flops caused the problem? Or worse yet,
>something that isn't supposed to overflow does, corrupts other
>computations, I take the reciprocal to get zero, and the results look ok
>because they're just numbers, but they are totally bogus. (Ok, this
>isn't too likely, but you never know.)
>
>In this particular case, I want my computations to die when something
>unexpected happens, like overflow, invalid ops, etc.

Which clearly establishes that there are multiple valid sorts of behaviour
for this.

In an "embedded" application (of whatever sort), it is utterly inappropriate
for the program to crash. If there is no mechanism to deal with recovery,
then it makes little sense to allow a crash. Hence, "propagate error, and
keep going."

There are also more "monitored" applications where it is quite appropriate
to have errors result in an explicit sort of "crash." Stop the batch job;
notify the system operator that there's a problem; page the programmer.

Obviously the behaviour under conditions of error needs to be controlled so
that one can select from these sorts of actions when an error occurs.
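
A sketch of such selectable behavior in C (the names and the three
policies are invented for illustration): route every detected error
through one hook whose policy is configured per installation.

    #include <stdio.h>
    #include <stdlib.h>

    /* Invented names; illustrative sketch only. */
    enum err_policy { ERR_PROPAGATE, ERR_NOTIFY, ERR_ABORT };

    static enum err_policy policy = ERR_NOTIFY;  /* site-configurable */

    int report_error(const char *what)
    {
        switch (policy) {
        case ERR_PROPAGATE:             /* embedded: keep going */
            break;
        case ERR_NOTIFY:                /* monitored: tell the operator */
            fprintf(stderr, "operator alert: %s\n", what);
            break;
        case ERR_ABORT:                 /* batch: stop the job */
            fprintf(stderr, "fatal: %s\n", what);
            abort();
        }
        return -1;                      /* callers see an error code */
    }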

Christopher B. Browne

unread,
Mar 22, 1998, 3:00:00 AM3/22/98
to

On Sat, 21 Mar 1998 21:47:41 -0500, R. Toy <rt...@mindspring.com> posted:
>Christopher B. Browne wrote:
>>
>> Which clearly establishes that there are multiple valid sorts of behaviour
>> for this.
>>
>> In an "embedded" application (of whatever sort), it is utterly inappropriate
>> for the program to crash. If there is no mechanism to deal with recovery,
>> then it makes little sense to allow a crash. Hence, "propagate error, and
>> keep going."
>>
>
>Yes, I agree with multiple valid sorts of behavior.
>
>Let's see, I don't know what to do about the error, so I'll just
>continue and pull out the control rods, turn off the coolant, disconnect
>the operator panel and quietly keep going until meltdown. :-)

If the program doesn't know how to recover from the error, it may still be
unacceptable to 'do nothing.'

--> "I don't know what to do about the error, so I'll just ABEND/core
dump/..., leaving rods in place so that the reactor melts down..."

I *thought* that pulling out the control rods would cause reactions to stop,
which may be incorrect.

*My* reaction would be to dump the heavy water, which would stop reaction in
the pile at any of the reactors (CANDU) that I've visited, but I digress, as
nuclear chemistry isn't the question here.

In any case, for nuclear applications, there is presumably a need to
validate logic paths to a greater extent than is normal for less critical
systems. The "meltdown" failure is a rather more catastrophic (and costly,
by almost *any* metric, whether financial, legal, or radiological) event
than most software failures.

When the software I'm working on presently fails, someone may get annoyed
because their paycheque is a few dollars off. Powers that be are less
concerned about verifying correctness there than they are about (say)
aircraft autopilot control systems.

--
Just be thankful Microsoft isn't a manufacturer of pharmaceuticals.

Jonathan Guthrie

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

In comp.lang.scheme R. Toy <rt...@mindspring.com> wrote:
> > In my opinion, the guys who did the C numeric extensions (I'm drawing a
> > blank on the actual name) got it right. When you get an error, propagate
> > that error, but keep the calculation going.

> Yeah, I just really *LOVE* it when my simulation that's been running for


> days finally finishes and says NaN for every answer. So, which one of
> those billion or trillion flops caused the problem? Or worse yet,
> something that isn't supposed to overflow does, corrupts other
> computations, I take the reciprocal to get zero, and the results look ok
> because they're just numbers, but they are totally bogus. (Ok, this
> isn't too likely, but you never know.)

> In this particular case, I want my computations to die when something
> unexpected happens, like overflow, invalid ops, etc.

So tell it to die. That's an available option. However, don't be surprised
when the knowledge that the calculation went out of range HERE doesn't help
you very much to figure out what the real problem is. Numeric errors tend
to propagate quite a bit before they get far enough out of range to cause
the system to barf.

Note that the last time I did the "simulation that runs for days" thing,
it was in Modula-3 (which has exceptions) and the results were often
bogus even though the exceptions were never tripped. Finding the reasons
for bogus results is left as an exercise for the programmer. (I had a
boss once who said that all programs go through four phases: it crashes;
it doesn't crash, but produces zero output; it produces nonzero
garbage; and it produces correct output.)

This is getting away from "programming is fun again" (an attitude which I
highly recommend---programming should be fun and programmers should always
be learning new languages <insert TAO OF PROGRAMMING quote here>) and
sounding more like work. The last time I had real joy from programming
was yesterday when I found the reversed-branch that I had been searching
for (even though I didn't know that I had been searching for it) for three
days.

Raymond Toy

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

cbbr...@news.brownes.org (Christopher B. Browne) writes:

>
> If the program doesn't know how to recover from the error, it may still be
> unacceptable to 'do nothing.'
>
> --> "I don't know what to do about the error, so I'll just ABEND/core
> dump/..., leaving rods in place so that the reactor melts down..."
>
> I *thought* that pulling out the control rods would cause reactions to stop,
> which may be incorrect.

I don't remember exactly how they work.

>
> *My* reaction would be to dump the heavy water, which would stop reaction in
> the pile at any of the reactors (CANDU) that I've visited, but I digress, as
> nuclear chemistry isn't the question here.

Yes, but here you are taking explicit action when there is an error.
The point was just propagating the error without doing anything about
it until much, much later, if at all.

>
> In any case, for nuclear applications, there is presumably a need to
> validate logic paths to a greater extent than is normal for less critical
> systems. The "meltdown" failure is a rather more catastrophic (and costly,
> by almost *any* metric, whether financial, legal, or radiological) event
> than most software failures.

But the scenario is possible since you cannot or will not validate
all of them. And the failure of the Ariane 5 launch was caused by a
software bug, and that must have cost hundreds of millions of dollars.

In any case, I think we agree: Do something sensible (whatever that
might mean) when there are errors. This includes "do nothing" if that
makes sense.

This has moved way too far from lisp and scheme, so I'll be quiet now.

Ray

Jonathan Guthrie

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

In comp.lang.scheme R. Toy <rt...@mindspring.com> wrote:
> Christopher B. Browne wrote:

> > Which clearly establishes that there are multiple valid sorts of behaviour
> > for this.

> > In an "embedded" application (of whatever sort), it is utterly inappropriate
> > for the program to crash. If there is no mechanism to deal with recovery,
> > then it makes little sense to allow a crash. Hence, "propagate error, and
> > keep going."
> >

> Yes, I agree with multiple valid sorts of behavior.

> Let's see, I don't know what to do about the error, so I'll just
> continue and pull out the control rods, turn off the coolant, disconnect
> the operator panel and quietly keep going until meltdown. :-)

> > However, I don't see much point in propagating the error. It seems to
> > me that a reset and restart could hardly do worse.

Never done it, have you?

I used to program gas-flow computers for a living. We controlled big (36-
and 42-inch) transcontinental gas pipelines. Closing the wrong valve or
closing valves in the wrong order can result in explosions, fires, loss of
life, and so forth. We very carefully added methods to propagate errors
and not cause unit restarts.

Why? Because if the unit's calculations have failed, it clearly is
incapable of handling the situation. Quick: An input is out-of-range.
Should the valve open or close? There is no a priori way of knowing.
In that situation, you need to involve more resources than the local
system has available. If the unit is restarting, there is NOTHING
that an operator can do about it except notice that communication with
the unit has failed. He'll dispatch a truck (maybe) and they'll get
there in 2-6 hours. If the error is propagated, it can be reported to
the operator and he can take remote control over the unit and take
appropriate action.

That action may involve other units hundreds of miles apart, that the
original unit doesn't "know anything" about. Most often, what the
operator does is what the unit does in that situation: nothing. The
physical system is set up to handle errors gracefully as well. This
is because mechanical parts fail more often than the programming does and
there is little the program can do if the valve decides that today it's
going to be closed.

This system works very well and is quite robust. (About the only thing I
ever saw cause a calculation failure was an open static pressure transducer
loop.) So, I've BTDT, and I really did get a T-shirt.
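
The "propagate, don't restart" scheme can be sketched in C as data that
carries its own validity, so downstream logic and the remote operator see
the failure instead of a rebooting unit. (An illustrative sketch; the
names and the plausible-range figures are invented, not taken from any
real flow computer.)

    /* A measurement that carries its own validity flag. */
    struct reading {
        double value;
        int    valid;   /* 0 = sensor open / out of range */
    };

    struct reading read_static_pressure(double raw)
    {
        struct reading r;
        r.valid = (raw >= 0.0 && raw <= 2000.0);  /* invented range */
        r.value = r.valid ? raw : 0.0;
        return r;
    }

    /* On bad data, the control step holds its last commanded state and
       reports upstream; it neither guesses nor resets the unit. */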

Raymond Toy

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

Jonathan Guthrie <jgut...@brokersys.com> writes:

> In comp.lang.scheme R. Toy <rt...@mindspring.com> wrote:

> > > In my opinion, the guys who did the C numeric extensions (I'm drawing a
> > > blank on the actual name) got it right. When you get an error, propagate
> > > that error, but keep the calculation going.
>
> > Yeah, I just really *LOVE* it when my simulation that's been running for
> > days finally finishes and says NaN for every answer. So, which one of
> > those billion or trillion flops caused the problem? Or worse yet,

[snip]


>
> So tell it to die. That's an available option. However, don't be surprised
> when the knowledge that the calculation went out of range HERE doesn't help
> you very much to figure out what the real problem is. Numeric errors tend
> to propagate quite a bit before they get far enough out of range to cause
> the system to barf.

But, surely, it's better to find out after the first few seconds than
after days of computations. At the very least I didn't waste days of
time.

In any case, I think we both agree that the appropriate actions should
be taken, whatever they may be.

Ray

Raymond Toy

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

Jonathan Guthrie <jgut...@brokersys.com> writes:

> In comp.lang.scheme R. Toy <rt...@mindspring.com> wrote:

> > Christopher B. Browne wrote:
>
> > > Which clearly establishes that there are multiple valid sorts of behaviour
> > > for this.
>
> > > In an "embedded" application (of whatever sort), it is utterly inappropriate
> > > for the program to crash. If there is no mechanism to deal with recovery,
> > > then it makes little sense to allow a crash. Hence, "propagate error, and
> > > keep going."
> > >
>

> > However, I don't see much point in propagating the error. It seems to
> > me that a reset and restart could hardly do worse.
>
> Never done it, have you?

Yes I have. In that case, the appropriate thing was to reset and
restart.

>
> I used to program gas-flow computers for a living. We controlled big (36-
> and 42-inch) transcontinental gas pipelines. Closing the wrong valve or
> closing valves in the wrong order can result in explosions, fires, loss of
> life, and so forth. We very carefully added methods to propagate errors
> and not cause unit restarts.

But here you very carefully decided how to handle errors. Perhaps I
was mistaken, but my interpretation of "propagate error and keep
going" was that no special handling of the error is done at all. Somewhat
like letting NaN keep generating more NaN until done. In your case
you've decided exactly what to do with your "NaN": keep going and let
someone else handle it.

Ray

Kenneth P. Turvey

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

On 23 Mar 1998 10:56:02 -0500, Raymond Toy <t...@rtp.ericsson.se> wrote:
>cbbr...@news.brownes.org (Christopher B. Browne) writes:
>
>>
>> If the program doesn't know how to recover from the error, it may still be
>> unacceptable to 'do nothing.'
>>
>> --> "I don't know what to do about the error, so I'll just ABEND/core
>> dump/..., leaving rods in place so that the reactor melts down..."
>>
>> I *thought* that pulling out the control rods would cause reactions to stop,
>> which may be incorrect.
>
>I don't remember exactly how they work.
>

The control rods are neutron absorbers. They soak up neutrons flying
out of the pile. You put the control rods into the pile to stop the
reaction, and pull them out to heat things up again.

If your program crashes it should definitely drop the control rods :-)

--
Kenneth P. Turvey <ktu...@pug1.SprocketShop.com>

The optimist thinks this is the best of all possible worlds. The
pessimist fears it is true.
-- Robert Oppenheimer

Thant Tessman

unread,
Mar 23, 1998, 3:00:00 AM3/23/98
to

Jonathan Guthrie wrote:

> [...] That, and the fact that there often is no in-program

> way to deal with the actual error (therefore many errors are
> best simply ignored) is the reason why "never test for an
> error that you don't know how to deal with" is such a common,

> and useful, attitude WRT error trapping. [...]

http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html

-thant

Martti Halminen

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Kenneth P. Turvey wrote:

> >> I *thought* that pulling out the control rods would cause reactions to stop,
> >> which may be incorrect.
> >
> >I don't remember exactly how they work.
> >
>
> The control rods are neutron absorbers. They soak up neutrons flying
> out of the pile. You put the control rods into the pile to stop the
> reaction, and pull them out to heat things up again.
>
> If your program crashes it should definitely drop the control rods :-)

Just hope that you know what type of reactor you have: the previous is
OK for pressurized water reactors, but boiling water reactors (at least
those used hereabouts) have the rod control mechanisms underneath the
reactor, so dropping the rods takes them out of the core!

Christopher B. Browne

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

On Tue, 24 Mar 1998 12:17:38 +0200, Martti Halminen <Martti....@dpe.fi> posted:

And then there are "heavy water" reactors where it's not the rods that
control things - it's the presence (or absence) of the heavy water.

If the program crashes --> dump the heavy water --> reaction stops.

Of course, this also means --> power goes off, so that you'd probably want
to make sure that the program isn't so buggy that this happens at stupid
times...

--
Those who do not understand Unix are condemned to reinvent it, poorly.

Michael Hobbs

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Christopher B. Browne wrote:
> And then there are "heavy water" reactors where it's not the rods that
> control things - it's the presence (or absence) of the heavy water.
>
> If the program crashes --> dump the heavy water --> reaction stops.

With all of these analogies flying around between programs and nuclear
reactor cores, it brings to mind the phrase "core dump". Hopefully, a
core dump in the program doesn't cause one in the reactor.

Another hopelessly off-topic thread: What is the origin of the word
"core" in "core dump"? I'm assuming it's a throwback to the days when
memory was composed of ferrite core; but I can't be certain, since I
wasn't around back then.

Charles Martin

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Michael Hobbs wrote:

>
> Another hopelessly off-topic thread: What is the origin of the word
> "core" in "core dump"? I'm assuming it's a throwback to the days when
> memory was composed of ferrite core; but I can't be certain, since I
> wasn't around back then.

Sure, go ahead, rub it in.

Yes, that's the reason. There are still folks around (like me) who find
themselves referring to the amount of "core memory" rather than RAM in
weak moments. And that's "ferrite coreS" -- even in the old days the
computers had more than one. :-)

Frank A. Adrian

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Charles Martin wrote in message <3517DEEB...@connix.com>...

>Yes, that's the reason. There are still folks around (like me) who find
>themselves referring to the amount of "core memory" rather than RAM in
>weak moments. And that's "ferrite coreS" -- even in the old days the
>computers had more than one. :-)

Yes, but not that many more...
--
Frank A. Adrian
First DataBank
frank_...@firstdatabank.com (W)
fra...@europa.com (H)
This message does not necessarily reflect those of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.


Raymond Toy

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

"Frank A. Adrian" <frank_...@firstdatabank.com> writes:

> Charles Martin wrote in message <3517DEEB...@connix.com>...
> >Yes, that's the reason. There are still folks around (like me) who find
> >themselves referring to the amount of "core memory" rather than RAM in
> >weak moments. And that's "ferrite coreS" -- even in the old days the
> >computers had more than one. :-)
>
> Yes, but not that many more...

In front of me is a card which contains 16K bytes of core memory.
(The card is about 1 ft square, the core itself is a small board about
5 in by 8 in.) I didn't count them, but I assume there are 16k*8 tiny
little core magnets on the board. I don't know how many such boards
were in the computer, but it's not exactly a small number of "ferrite
coreS", but certainly not a large number for today's computer memory.

Ray

f o x a t . n y u . e d u

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Charles Martin <crma...@connix.com> writes:

> Michael Hobbs wrote:
>
> >
> > Another hopelessly off-topic thread: What is the origin of the word
> > "core" in "core dump"? I'm assuming it's a throwback to the days when
> > memory was composed of ferrite core; but I can't be certain, since I
> > wasn't around back then.
>
> Sure, go ahead, rub it in.
>

> Yes, that's the reason. There are still folks around (like me) who find
> themselves referring to the amount of "core memory" rather than RAM in
> weak moments. And that's "ferrite coreS" -- even in the old days the
> computers had more than one. :-)

But why were they called "ferrite cores"? They're donut-shaped;
their cores are empty.

Dick Margulis

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to Raymond Toy

Raymond Toy wrote:
>
> "Frank A. Adrian" <frank_...@firstdatabank.com> writes:
>
> > Charles Martin wrote in message <3517DEEB...@connix.com>...
> > >Yes, that's the reason. There are still folks around (like me) who find
> > >themselves referring to the amount of "core memory" rather than RAM in
> > >weak moments. And that's "ferrite coreS" -- even in the old days the
> > >computers had more than one. :-)
> >
> > Yes, but not that many more...
>
> In front of me is a card which contains 16K bytes of core memory.
> (The card is about 1 ft square, the core itself is a small board about
> 5 in by 8 in.) I didn't count them, but I assume there are 16k*8 tiny
> little core magnets on the board. I don't know how many such boards
> were in the computer, but it's not exactly a small number of "ferrite
> coreS", but certainly not a large number for today's computer memory.
>
> Ray


Ray,

Are you sure the card does not contain 16k bits, as opposed to bytes?
The IBM 1620 I worked on in the early 1960s had 20,000 6-bit BCD digits,
for a total of 120,000 bits. This occupied a cube-shaped array
approximately seven or eight inches on a side, give or take a bit. If
your card contains 128,000 or so bits, that would imply an eightfold
increase in packing density of ferrite cores somewhere between the 1620
and whatever machine your card came from, which seems somewhat
implausible given the nature of the beast.

Charles Martin

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Raymond Toy wrote:
>
> "Frank A. Adrian" <frank_...@firstdatabank.com> writes:
>
> > Charles Martin wrote in message <3517DEEB...@connix.com>...
> > >Yes, that's the reason. There are still folks around (like me) who find
> > >themselves referring to the amount of "core memory" rather than RAM in
> > >weak moments. And that's "ferrite coreS" -- even in the old days the
> > >computers had more than one. :-)
> >
> > Yes, but not that many more...
>
> In front of me is a card which contains 16K bytes of core memory.
> (The card is about 1 ft square, the core itself is a small board about
> 5 in by 8 in.) I didn't count them, but I assume there are 16k*8 tiny
> little core magnets on the board. I don't know how many such boards
> were in the computer, ....

One.

If you were lucky.

Charles Martin

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

David Fox wrote:
>
> Charles Martin <crma...@connix.com> writes:
>
> > Michael Hobbs wrote:
> >
> > >
> > > Another hopelessly off-topic thread: What is the origin of the word
> > > "core" in "core dump"? I'm assuming it's a throwback to the days when
> > > memory was composed of ferrite core; but I can't be certain, since I
> > > wasn't around back then.
> >
> > Sure, go ahead, rub it in.
> >
> > Yes, that's the reason. There are still folks around (like me) who find
> > themselves referring to the amount of "core memory" rather than RAM in
> > weak moments. And that's "ferrite coreS" -- even in the old days the
> > computers had more than one. :-)
>
> But why were they called "ferrite cores"? They're donut shaped,
> their cores are empty.
>

There's always one in any crowd.

Frank A. Adrian

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

Raymond Toy wrote in message <4n1zvr4...@rtp.ericsson.se>...
>were in the computer, but it's not exactly a small number of "ferrite
>coreS", but certainly not a large number for today's computer memory.

That's what I'm comparing it to. Few of the newbies have had the joy of
trying to fit a simulation into 8K words of memory or know the joy of the
"huge expanse" of a 64K memory! Hell, if you really tried, you could get a
whole RTOS into less than 2K (of course it didn't do much more than handle
interrupts, directly access IO, and queue processes at that size, but the
fact that you could do it at all seems a miracle today). Now that we've
bored everyone to tears - I've always thought that a good way to keep
programs fast was to place artificial constraints on memory. It really
makes one focus on memory bandwidth (often THE bottleneck in today's
systems), structure sizes, etc. Has anyone intentionally tried this
approach? Also, lest others in these newsgroups think I've lost my mind,
let me state that I am quite aware that this approach also leads to a fair
amount of programmer pain as well as a marked decrease in programmer
productivity. It just seems like a tradeoff one could make if one wanted
a "fast" system rather than a system, fast.

gregorys

unread,
Mar 24, 1998, 3:00:00 AM3/24/98
to

I maintained a system about 15 years ago that had, if memory has not
failed, 400,000 18-bit words. They were in 5" square metal boxes of
128,000 words each. These were microscopic cores. I still wonder how
they were made.


--
greg...@one.net
Charles Martin wrote in message <35182E02...@connix.com>...


>Raymond Toy wrote:
>>
>> "Frank A. Adrian" <frank_...@firstdatabank.com> writes:
>>

>> > Charles Martin wrote in message <3517DEEB...@connix.com>...

>> > >Yes, that's the reason. There are still folks around (like me) who find
>> > >themselves referring to the amount of "core memory" rather than RAM in
>> > >weak moments. And that's "ferrite coreS" -- even in the old days the
>> > >computers had more than one. :-)
>> >

>> > Yes, but not that many more...
>>
>> In front of me is a card which contains 16K bytes of core memory.
>> (The card is about 1 ft square, the core itself is a small board about
>> 5 in by 8 in.) I didn't count them, but I assume there are 16k*8 tiny
>> little core magnets on the board. I don't know how many such boards

Jens Kilian

unread,
Mar 25, 1998, 3:00:00 AM3/25/98
to

f o x @ c a t . n y u . e d u (David Fox) writes:
> But why were they called "ferrite cores"? They're donut shaped,
> their cores are empty.

You may think it's funny, but the question is legitimate. The ferrites are
called cores because the earliest ferrite memories had actual coils of wire,
and the ferrites were the *coils'* cores. Later models (with improved sense
amplifiers) had the wires running through the ferrites:

| /
| /
/\ /
/ \
-----\ \-----
\ /
/ \/
/ |
/ |

Bye,
Jens (who never used core memory, but at least *learned* about it.)
--
mailto:j...@acm.org phone:+49-7031-14-7698 (HP TELNET 778-7698)
http://www.bawue.de/~jjk/ fax:+49-7031-14-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]

vik...@cit.org.by

unread,
Mar 25, 1998, 3:00:00 AM3/25/98
to

In article <3517CD7B...@ccmail.fingerhut.com>,
mike....@ccmail.fingerhut.com wrote:

......

> Another hopelessly off-topic thread: What is the origin of the word
> "core" in "core dump"? I'm assuming it's a throwback to the days when
> memory was composed of ferrite core; but I can't be certain, since I
> wasn't around back then.
>

Here's a quote from the Jargon file:

:core: n. Main storage or RAM. Dates from the days of
ferrite-core memory; now archaic as techspeak most places outside
IBM, but also still used in the Unix community and by old-time
hackers or those who would sound like them. Some derived idioms
are quite current; `in core', for example, means `in memory'
(as opposed to `on disk'), and both {core dump} and the `core
image' or `core file' produced by one are terms in favor. Some
varieties of Commonwealth hackish prefer {store}.


Cheers,
Eugene

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/ Now offering spam-free web-based newsreading

Brent A Ellingson

unread,
Mar 26, 1998, 3:00:00 AM3/26/98
to

As near as I can tell, Thant supplied this article to support Jon's
statement.

This article refers to the failure of a test flight of an Ariane 5
rocket. From what I can tell, it failed partially because a software
module on the guidance system didn't test for exceptions during the
conversion of a 64-bit floating point number to a 16-bit int. This
was considered fine -- the module didn't do anything useful after
take off, and if it failed after that (which it did) it wasn't
going to affect anything. Real world, physical constraints on the
rocket prevented the values from going out of range before take off.

However, the OS/hardware/whatever of the guidance system inexplicably
DID care, and when the exception was propagated back up to the OS,
it DID note the error. I quote from the article:

Although the source of the Operand Error has been identified,
this in itself did not cause the mission to fail. The
specification of the exception-handling mechanism also
contributed to the failure. In the event of any kind of
exception, the system specification stated that: the failure
should be indicated on the databus, the failure context
should be stored in an EEPROM memory (which was recovered
and read out for Ariane 501), and finally, the SRI processor
should be shut down.
...
There is reason for concern that a software exception should
be allowed, or even required, to cause a processor to halt
while handling mission-critical equipment. Indeed, the loss
of a proper software function is hazardous ... this resulted
in the switch-off of two still healthy critical units of
equipment.

In other words, why the hell did the equipment test for the exception
if it didn't know what to do with it, except crash and destroy the
rocket? Had it not tested for the exception on the non-critical
code, the rocket probably would not have failed.
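
For reference, the kind of range check whose omission the report
describes is only a few lines of C. (A sketch of the general technique,
not the actual Ariane code; note that the comparison is also false for
NaN, so NaN is rejected as well.)

    #include <stdint.h>

    /* Sketch: returns 0 on success, -1 if x will not fit in 16 bits. */
    int checked_d_to_i16(double x, int16_t *out)
    {
        if (!(x >= INT16_MIN && x <= INT16_MAX))
            return -1;          /* caller decides what happens next */
        *out = (int16_t)x;
        return 0;
    }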

Brent Ellingson
bell...@badlands.nodak.edu

Erik Naggum

unread,
Mar 26, 1998, 3:00:00 AM3/26/98
to

* Brent A Ellingson

| In other words, why the hell did the equipment test for the exception if
| it didn't know what to do with it, except crash and destroy the rocket?
| Had it not tested for the exception on the non-critical code, the rocket
| probably would not have failed.

it is amazing that the view that "don't test for errors you don't know
how to handle" is _still_ possible in the light of that report, but I
guess that's the way with _beliefs_. I cannot fathom how people will
read all the contradictory evidence they can find and still end up
believing in some braindamaged myths.

the problem is: the equipment did _not_ test for the exception. the
exception was allowed to propagate unchecked until the "crash-and-burn"
exception handler took care of it. this could be viewed as silly, but
the report clearly states why this was sound design: unhandled exceptions
should be really serious and should indicate random hardware failure.

the _unsound_ design was not in the exception handling at all, it was in
allowing old code from Ariane 4 to still run in Ariane 5, notably code that
should run for a while into the launch sequence on Ariane 4 because it
would enable shorter re-launch cycles -- which was not necessary at all
on Ariane 5. the error was thus not in stopping at the wrong time or
under the wrong conditions -- it was in _running_ code at the wrong time
and under the wrong conditions.

"had it not run the bogus code, the rocket would not have failed in it."

how can you expect to learn from mistakes when you insist that the errors
you observe are caused by mistakes you think you have _already_ learned
from, and that others (the dumbass people who use exceptions, in this
case) are at fault for not learning from?

rather than "don't test for error you don't know how to handle", I
propose "don't run code with errors you aren't prepared to handle".

did you notice how the report had the brilliant insight that we have
gotten used to thinking that code is good until proven faulty and that this
was the major cultural problem pervading the whole design and deployment
process? it's high time this insight sank in with the right people
and caused more focus on provably correct code and verification. with the
extremely arrogant attitude still held by Brent and many others like him,
we will continue to produce crappy code that crashes rockets for millennia
to come without learning what the problem really is: unchecked assumptions!

#:Erik
--
religious cult update in light of new scientific discoveries:
"when we cannot go to the comet, the comet must come to us."

Thant Tessman

unread,
Mar 26, 1998, 3:00:00 AM3/26/98
to

Brent A Ellingson wrote:
>
> Thant Tessman wrote:
> >
> > Jonathan Guthrie wrote:
> >
> > > [...] That, and the fact that there often is no in-program
> > > way to deal with the actual error (therefore many errors are
> > > best simply ignored) is the reason why "never test for an
> > > error that you don't know how to deal with" is such a common,
> > > and useful, attitude WRT error trapping. [...]
> >
> > http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html
> >
> > -thant
>
> As near as I can tell, Thant supplied this article to support Jon's
> statement.

I posted it because I think it demonstrates exactly the opposite.

> This article refers to the failure of a test flight of an Ariane
> 5 rocket. From what I can tell, it failed partially because
> a software module on the guidance system didn't test for
> exceptions during the conversion of a 64-bit floating point
> number to a 16 bit int. This was considered fine -- the module
> didn't do anything useful after take off, and if it failed after
> that (which it did) it wasn't going to affect anything.

The reason they didn't bother to catch the exception was not because the
module didn't do anything useful after takeoff, but because they were
working on the assumption that an exception would have indicated a
hardware problem in which case shutting down the unit would have been
the appropriate thing to do (to let the backup unit take over). The
fact that the software module wasn't actually serving any function at
the time was coincidental.

The engineers KNEW about the places in the code that could possibly
generate such an exception, and a conscious decision was made not to
deal with the critical section of code when converting the code from
Ariane 4 to Ariane 5. This, plus the fact that the valid input had
changed for Ariane 5, is what brought down the rocket.

-thant

Brent A Ellingson

unread,
Mar 27, 1998, 3:00:00 AM3/27/98
to

Erik Naggum wrote:
> did you notice how the report had the brilliant insight that we have
> gotten used to thinking that code is good until proven faulty and that this
> was the major cultural problem pervading the whole design and deployment
> process? it's high time this insight sank in with the right people
> and caused more focus on provably correct code and verification. with the
> extremely arrogant attitude still held by Brent and many others like him,
> we will continue to produce crappy code that crashes rockets for millennia
> to come without learning what the problem really is: unchecked assumptions!

It is provably impossible to verify all code. This isn't myth -- this
is fact. Arrogant people like Erik keep believing the stuff they
learned in intro math classes at University isn't real, but simply a
bunch of myths. They will keep believing it is *possible* to verify
all code, and will continue to write crappy programs they
believe are "provably correct" and "can't" fail.

The single biggest mistake I see documented in the report was
that the OS/hardware/whatever of the guidance system of the rocket was
designed and built on the assumption that the code it was running
was proven to be correct, and that any software failure indicated a
critical problem. As a result, the OS/hardware of the guidance
system was built on the incorrect idea that it should catch all
the software errors, including the errors that were clearly not
critical and which it had no sensible mechanism to correct. This
resulted in the error being caught and dealt with in the stupidest
way imaginable -- the guidance system crashed, the rocket nozzle went
to full deflection, and the engines continued to burn until the
whole thing was physically ripped to pieces. That is an example of
catching an error you were definitely better off ignoring.

The fact that *this* software failure was preventable only obscures
the fact that software failures, *in general*, are NOT preventable.
Trying to verify code is good. Mistakenly believing it is possible
to create "provably correct code" is like believing you can tell
the future by a combination of voodoo and looking at the guts of a
slaughtered goat. It can't be done. Period.

Whatever,
Brent Ellingson
bell...@badlands.nodak.edu

Erik Naggum

unread,
Mar 27, 1998, 3:00:00 AM3/27/98
to

* Brent A Ellingson

| It is provably impossible to verify all code. This isn't myth -- this is
| fact. Arrogant people like Erik keep believing the stuff they learned in
| intro math classes at University isn't real, but simply a bunch of myths.
| They will keep believing it is *possible* to verify all code, and will
| continue to write crappy programs they believe are "provably
| correct" and "can't" fail.

man, what did verification _do_ to you? and why do you have to make such
an incredibly stupid insult just to make yourself feel better? just
because you _obviously_ cannot write correct code doesn't mean those who
say they can, _and_ back up their position with a ten-year history of
code that just _doesn't_ fail, are frauds and liars. I wonder what hurt
you so badly, I really do, but I sure am glad it wasn't me. please make
sure you catch the guys, though -- your hostility is eating you up.

the Department of Informatics at the University of Oslo is perhaps _the_
pioneering site in verification. I can assure you that this stuff is not
"intro math classes", but you have nothing to learn from mistakes you
don't already know how to handle, right?

| That is an example of catching an error you were definately better off
| ignoring.

ok, so this _is_ the core credo of a religion with you, and I was in
error for ridiculing your religious beliefs. I'm really sorry.

| The fact that *this* software failure was preventable only obscures the
| fact that software failures, *in general*, are NOT preventable.

yeah, while you're predicting the future and are obviously infallible in
your own eyes, I'm arrogant. I think I'll stick with arrogant.

| Trying to verify code is good. Mistakenly believing it is possible to
| create "provably correct code" is like believing you can tell the future
| by a combination of voodoo and looking at the guts of a slaughtered goat.
| It can't be done. Period.

I feel deeply sorry for you, but I feel even sorrier for the poor people
who might hire you or otherwise stumble into your code.

Jon S Anthony

unread,
Mar 27, 1998, 3:00:00 AM3/27/98
to

Erik Naggum <cle...@naggum.no> writes:

> the problem is: the equipment did _not_ test for the exception. the
> exception was allowed to propagate unchecked until the "crash-and-burn"
> exception handler took care of it. this could be viewed as silly, but
> the report clearly states why this was sound design: unhandled exceptions
> should be really serious and should indicate random hardware failure.

Exactly.

> the _unsound_ design was not in the exception handling at all, it was in
> allowing old code from Ariane 4 to still run in Ariane 5, notably code that
> should run for a while into the launch sequence on Ariane 4 because it
> would enable shorter re-launch cycles -- which was not necessary at all
> on Ariane 5. the error was thus not in stopping at the wrong time or
> under the wrong conditions -- it was in _running_ code at the wrong time
> and under the wrong conditions.

This is about the best succinct description of what went wrong and why
that I've seen.

> extremely arrogant attitude still held by Brent and many others like him,
> we will continue to produce crappy code that crashes rockets for millennia
> to come without learning what the problem really is: unchecked assumptions!

The last bit here "unchecked assumptions" is a precise, simple and
accurate anatomy of what the actual problem really was (and _is_ all
over the place in software "engineering"). Of course it is extremely
unlikely that people like Brent will ever clue into this as their own
assumptions are blinding them to it.


/Jon

--
Jon Anthony
Synquiry Technologies, Ltd., Belmont, MA 02178, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Christopher Browne

unread,
Mar 28, 1998, 3:00:00 AM3/28/98
to

On 27 Mar 1998 20:51:16 +0000, Erik Naggum <cle...@naggum.no> wrote:
>* Brent A Ellingson
>| It is provably impossible to verify all code. This isn't myth -- this is
>| fact. Arrogant people like Erik keep believing the stuff they learned in
>| intro math classes at University isn't real, but simply a bunch of myths.
>| They will keep believing it is *possible* to verify all code, and will
>| continue to write crappy programs they believe are "provably
>| correct" and "can't" fail.
>
> man, what did verification _do_ to you? and why do you have to make such
> an incredibly stupid insult just to make yourself feel better? just
> because you _obviously_ cannot write correct code doesn't mean those who
> say they can, _and_ back up their position with a ten-year history of
> code that just _doesn't_ fail, are frauds and liars. I wonder what hurt
> you so badly, I really do, but I sure am glad it wasn't me. please make
> sure you catch the guys, though -- your hostility is eating you up.
>
> the Department of Informatics at the University of Oslo is perhaps _the_
> pioneering site in verification. I can assure you that this stuff is not
> "intro math classes", but you have nothing to learn from mistakes you
> don't already know how to handle, right?

You're still left with the GIGO problems, and there's not just one of
them...

- If the program is "proven" to be correct by whatever means, but you
then toss incorrect data at it, results will be difficult to predict.

- The program cannot be more correct than the specifications used to
define what the program was supposed to do. If the program was written
to behave the wrong way, it doesn't matter if the internals are proven
to be "verified correct," the results of running the program will be
incorrect.

If you don't have time/opportunity to fully define the parameters of the
system that the program is supposed to somehow analyze or react to, then
you are quite limited as to how "verifyably correct" you can be.

- If my boss sneezes on a hankey, and says: "Here are the
specifications: Go write a program!" the value of verifying the
correctness of the program is rather limited.

- If my boss tells me to go write a program that will run on an
unreliable OS platform, it doesn't much matter how little or how much
verification work I do on my program, it won't likely prevent the system
from crashing.

It is quite appropriate to do some verification to reduce the number of
possible errors that I may be responsible for introducing into the
system; there will be some point of diminishing returns on such efforts
where the cost of the efforts exceeds the expected returns.

--
Windows NT: The Mister Hankey of operating systems
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/lsf.html>

Rob Warnock

unread,
Mar 28, 1998, 3:00:00 AM3/28/98
to

Brent A Ellingson <bell...@badlands.nodak.edu> wrote:
+---------------

| It is provably impossible to verify all code. This isn't myth -- this
| is fact...
+---------------

Yes, but...

+---------------
| ...software failures, *in general*, are NOT preventable.
+---------------

Yes, but... (Part of the problem is that "failure" isn't always a well-
defined technical term.)

+---------------


| Trying to verify code is good. Mistakenly believing it is possible
| to create "provably correct code" is like believing you can tell
| the future by a combination of voodoo and looking at the guts of a
| slaughtered goat. It can't be done. Period.

+---------------

True, but not a reason to throw up one's hands and "give up".

While it is true that one cannot in general prove an arbitrary already-
written program correct or incorrect (it's equivalent to "the halting
problem", which is provably not solvable in general), you can *construct*
correct programs by deriving them from their proofs. (Look in the literature
for what Dijkstra & Gries &c were doing a decade ago.)

Which leaves one a choice: If you want provably-correct programs, you *can*
start with a proof and derive the program from it. Or you can say, "It's
too hard to construct proofs of programs big enough to be interesting"
[despite some examples to the contrary], and blithely code away in whatever
style you fancy, but have no assurance that you'll ever be able to prove
anything (one way *or* the other) about your code ex post facto. (Though
*some* programs can be proved/disproved ex post facto, your program just
might be one of those countless programs for which the program prover
never halts.) Your choice.
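
A toy example of that derivation style, sketched in C: the loop is
written around its invariant, and the proof obligations appear as
comments, with assert() standing in for a mechanical verifier.

    #include <assert.h>

    /* Toy sketch: establish a == q*b + r with 0 <= r < b. */
    void divmod(unsigned a, unsigned b, unsigned *q, unsigned *r)
    {
        assert(b > 0);                      /* precondition */
        *q = 0; *r = a;                     /* invariant: a == (*q)*b + *r */
        while (*r >= b) {
            *r -= b;                        /* invariant preserved ... */
            (*q)++;                         /* ... by the paired updates */
        }
        assert(a == *q * b + *r && *r < b); /* postcondition */
    }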


-Rob

p.s. Actually, to me the "provability" argument is somewhat silly.
Sure, I'd like all my code to be "correct", but IMHO the *real* problem
is that "correct" can only be defined in terms of a specification, and
one thing's for *damn* sure, there's no human way to create "provably
correct" specifications! (Or if you think there is, change "specifications"
to "requirements". Regress as necessary until you get back to human wants
and needs that led to the project [whatever it is] being instigated.
Let's see you "prove" something about *those*!)

-----
Rob Warnock, 7L-551 rp...@sgi.com http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673 [New area code!]
2011 N. Shoreline Blvd. FAX: 650-933-4392
Mountain View, CA 94043 PP-ASEL-IA

Erik Naggum

unread,
Mar 28, 1998, 3:00:00 AM3/28/98
to

* Rob Warnock

| Actually, to me the "provability" argument is somewhat silly. Sure, I'd
| like all my code to be "correct", but IMHO the *real* problem is that
| "correct" can only be defined in terms of a specification, and one
| thing's for *damn* sure, there's no human way to create "provably
| correct" specifications! (Or if you think there is, change
| "specifications" to "requirements". Regress as necessary until you get
| back to human wants and needs that led to the project [whatever it is]
| being instigated. Let's see you "prove" something about *those*!)

this argument, which is presented quite frequently, is hard to refute
because it makes a number of assumptions that one needs to challenge at
their root, not in their application. I'll try, in no particular order.

one assumption is that _all_ code should be provably correct, and that in
the face of very hard cases one can somehow deduce something about the
general provability issues. this is not so. an important argument in
verifiable programming is that that which is encapsulated should be
verified so you can trust your abstractions. without this requirement,
there is no upper bound to the complexity of proving anything, and those
who argue against verification (or its costs) frequently assume that they
will always deal with unverified code. this is obviously not the case --
people who have worked with these things for years have most probably
done something pretty smart to make their work bear fruit, and so the
often quite silly arguments about the fruitlessness of their endeavor are
only true if you never verify anything. this is somewhat like asking
"but how would the stock market work if the government set all prices?"

another assumption is that specifications need to be as precise as the
program is. this is not so. code contains a lot of redundancy of
expression and information and little expression of intent and purpose.
specifications should contain what the code does not. in particular,
preconditions and postconditions can be expressed (and checked) by other
means than computing the value. invariants that are not expressed in
code need to be provably maintained. given such intents of the system
that never get expressed in code, a specification can be fairly simple,
and inconsistencies in a specification are far easier to spot than in a
program.
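
For instance (a minimal C sketch, using assert() as the checking
mechanism): the postcondition states what the result must satisfy
without restating how the code computes it.

    #include <assert.h>
    #include <stdlib.h>

    static int cmp(const void *p, const void *q)
    {
        double a = *(const double *)p, b = *(const double *)q;
        return (a > b) - (a < b);
    }

    /* Sketch: the check expresses intent, not the algorithm.  (A full
       spec would also demand a permutation of the input; omitted.) */
    void sort_checked(double *a, size_t n)
    {
        qsort(a, n, sizeof *a, cmp);
        for (size_t i = 1; i < n; i++)
            assert(a[i-1] <= a[i]);   /* postcondition: nondecreasing */
    }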

yet another assumption is that code would be allowed to be written with
the same messy interaction of abstraction levels that programmers tend to
use today. this is not going to be allowed. to make verification work,
more information must be made explicit, while in most languages amenable
to verification, the redundancy in information is minimized. this latter
point is also worth underlining: not just any language can be subject to
verification. C and C++, for instance, have so complex semantics that it
is very hard to figure out what is actually going on unless the code is
exceptionally cleanly written.

creating correct software is a lot easier than creating buggy software
that works. however, if you start with buggy methodologies you'll never
obtain correct software, and you might mistakenly believe that correct
software is therefore impossible.

Rob Warnock

unread,
Mar 29, 1998, 3:00:00 AM3/29/98
to

Erik Naggum <cle...@naggum.no> wrote:
+---------------

| * Rob Warnock
| | Actually, to me the "provability" argument is somewhat silly. Sure, I'd
| | like all my code to be "correct", but IMHO the *real* problem is that
| | "correct" can only be defined in terms of a specification, and one
| | thing's for *damn* sure, there's no human way to create "provably
| | correct" specifications! ...

|
| this argument, which is presented quite frequently, is hard to refute
| because it makes a number of assumptions that one needs to challenge at
| their root, not in their application.
+---------------

O.k., so I flippantly overstated the case. (But it *was* in a postscript to
an article [not quoted] in which I actually made the case *for* constructing
correct programs from their proofs, so I *do* have some respect for
provability concerns...)

+---------------


| another assumption is that specifications need to be as precise as the
| program is. this is not so.

+---------------

Not as "precise", but certainly as "correct", yes? You won't deny, I hope,
that incorrect specifications usually lead to incorrect functioning of the
total system, *especially* when the code is proven to implement the
specification!

+---------------


| specifications should contain what the code does not. in particular,
| preconditions and postconditions can be expressed (and checked) by other
| means than computing the value. invariants that are not expressed in
| code need to be provably maintained.

+---------------

The problem I was noting in that postscript was that in attempting to
prove *total system* correctness [as opposed to proving correctness of
an encapsulated library component, which is often fairly straightforward]
one eventually must regress (step back) to the initial human desires
that led to the specification -- whereupon one runs smack bad into
the "DWIS/DWIM" problem ("Don't do what I *said*, do what I *meant*!"),
which at its root contains the conundrum that much of the time we
humans don't actually *know* exactly what we want!

+---------------
| creating correct software is a lot easier than creating buggy software
| that works. however, if you start with buggy methodologies you'll never
| obtain correct software, and you might mistakenly believe that correct
| software is therefore impossible.
+---------------

We violently agree. However, I was trying to warn that that *still*
isn't enough to prevent disasters, since the best you'll ever get
with the best methodologies is code whose behavior meets the originally
stated goals... WHICH MAY HAVE BEEN WRONG.

Yet I think we also agree that the truth of this point is no excuse
for not using the best methodologies we have access to *anyway*...


-Rob

Erik Naggum
Mar 29, 1998

* Rob Warnock

| Not as "precise", but certainly as "correct", yes? You won't deny, I
| hope, that incorrect specifications usually lead to incorrect functioning
| of the total system, *especially* when the code is proven to implement
| the specification!

no, I won't deny that, but there is an important difference between what
constitutes a correct program and a correct specification: the latter
must not contain inconsistencies or conflicts. a program must be allowed
to contain inconsistencies and conflicts because it is impossible to do
everything at once. since a specification is a statement of the static
properties of a program, and a program's execution is a dynamic process,
the types of incorrectness that can occur are vastly different in nature.
this all leads to simpler (to express and implement, anyway) requirements
on specifications than on programs. since the program should now be
derived from the specification, we have removed a tremendous fraction of
the randomness in the way humans think and process information.

writing specifications, however, is much harder than writing programs,
but at least you can always know whether it is internally consistent or
not.

| The problem I was noting in that postscript was that in attempting to
| prove *total system* correctness [as opposed to proving correctness of an
| encapsulated library component, which is often fairly straightforward]
| one eventually must regress (step back) to the initial human desires that
| led to the specification -- whereupon one runs smack bad into the
| "DWIS/DWIM" problem ("Don't do what I *said*, do what I *meant*!"), which
| at its root contains the conundrum that much of the time we humans don't
| actually *know* exactly what we want!

oh, yes. total system correctness is often meaningless even if it can be
proven, for just this reason. I see that we don't disagree on much, but I
have become wary of the many people who argue against proving correctness
of components because the human factors in "satisfiability" overshadow
any correctness properties of a system at some point close to the users.

| We violently agree. However, I was trying to warn that that *still*
| isn't enough to prevent disasters, since the best you'll ever get with
| the best methodologies is code whose behavior meets the originally stated
| goals... WHICH MAY HAVE BEEN WRONG.

we violently agree, indeed.

| Yet I think we also agree that the truth of this point is no excuse
| for not using the best methodologies we have access to *anyway*...

precisely, and to wrap this up: I think the report from the Ariane 5
failure was incredibly intelligent and honest about the issues they were
involved in. would that similar efforts were undertaken when less
critical software also fails. there is a lot to learn from mistakes
that we probably never will learn from until we get rid of the obvious
mistakes that we _believe_ we know how to handle. I'm reminded of a
"definition" of insanity that might apply to programming: to keep doing
the same thing over and over while expecting different results.

the irony of this whole verifiable programming situation is that we are
moving towards the point where we can prove that human beings should not
write software to begin with, and we let computers program computers.
however, as long as C++ and similar repeatedly-unlearned-from mistakes
hang around, programmers will still be highly paid and disasters will
come as certainly as sunrise.

Brent A Ellingson
Mar 30, 1998

Jon S Anthony (j...@synquiry.com) wrote:
: Erik Naggum <cle...@naggum.no> writes:

: > the problem is: the equipment did _not_ test for the exception. the
: > exception was allowed to propagate unchecked until the "crash-and-burn"
: > exception handler took care of it. this could be viewed as silly, but
: > the report clearly states why this was sound design: unhandled exceptions
: > should be really serious and should indicate random hardware failure.

: Exactly.

First, the report clearly says that the "crash-and-burn" exception handler
(which *is* an exception handler, but a damned bad one) was NOT sound design,
and they offer an alternative (I'm quoting from the report posted at
http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html):

Although the source of the Operand Error has been identified,
this in itself did not cause the mission to fail. The specification
of the exception-handling mechanism also contributed to the
failure ...

... It was the decision to cease the processor operation which
finally proved fatal ...

... For example the computers within the SRIs could have
continued to provide their best estimates of the required
attitude information. There is reason for concern that a
software exception should be allowed, or even required, to
cause a processor to halt while handling mission-critical
equipment ...

Second, we all seem to agree with the review board on this point:

The Board is in favour of the ... view, that software
should be assumed to be faulty until applying the currently
accepted best practice methods can demonstrate that it is
correct.

But none of us seems to agree with what they mean by this. My feeling
is that this implies that the OS/hardware/whatever of the guidance system
should have been designed on the assumption that the code it was running
was *not* proven correct, but rather that the code it was running *may*
be faulty.

Other people believe it implies that the engineers should have
verified that the code was correct before it was allowed to fly.

Hopefully, I'm not the only person who realizes there is a middle
ground there. However, I am taken aback both by Erik's idea that the
OS of the guidance system may have been designed properly and by his
idea that I am somehow advocating that the software module was
designed properly.

It is abundantly clear that both the OS *and* the software module
were designed incorrectly -- the software should have been verified,
but the OS should *not* have assumed it was verified.
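
To sketch that middle ground in code (my illustration, not code from
the report -- READ-ATTITUDE is an invented stand-in for the faulty
module):

  ;; Hypothetical sketch: the outer layer assumes READ-ATTITUDE may be
  ;; faulty and degrades to the last good estimate instead of halting
  ;; the processor.
  (defun read-attitude (sensor)         ; stand-in for the real module
    (/ 1.0 sensor))                     ; signals DIVISION-BY-ZERO on 0
  (defun attitude-or-estimate (sensor last-good)
    (handler-case (read-attitude sensor)
      (arithmetic-error () last-good))) ; degrade, don't halt
  ;; (attitude-or-estimate 0 :previous) => :PREVIOUS, not a dead processor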

Last,

: > extremely arrogant attitude still held by Brent... will continue
: > to produce crappy code...

: Of course it is extremely unlikely that people like Brent will ever
: clue into this as their own assumptions are blinding them to it.

I hate to break in on your little mating dance, but I don't know
where you get off making any of these assumptions about me.

I didn't write that assumptions should go unchecked, or that
all errors should be ignored, or even that ignoring errors is in
general the best policy. What I remember writing is that this report
makes it pretty clear that ignoring *this* error in *this* situation
would have been a *better* policy than allowing non-critical code
to crash both the main and backup guidance systems of the rocket,
ultimately causing the rocket to be torn to shreds by the atmosphere.

Whatever,
Brent Ellingson
bell...@badlands.nodak.kedu
[bellings@cx06 bellings]$

Frank Adrian
Mar 30, 1998

Christopher Browne wrote in message <6fhjbk$4o$1...@blue.hex.net>...

>
>You're still left with the GIGO problems, and there's not just one of
>them...
>
>- If the program is "proven" to be correct by whatever means, but you
>then toss incorrect data at it, results will be difficult to predict.

Well, if you take the specification down to the possible characters on an
input file stream or the possible events coming from the system, you can
prove that you've either (a) handled improper input or (b) chosen to ignore
it. But you can't state "I've handled the error cases" and then let them
fail.
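
In Lisp terms, that (a)-or-(b) discipline can at least be made visible in
the code (the event names and the HANDLE-INPUT stub below are invented):

  (defun handle-input (payload)                ; stub for illustration
    (format t "handling ~S~%" payload))

  ;; ECASE signals an error for any key not listed, so every input is
  ;; either (a) handled or (b) visibly chosen to be ignored -- nothing
  ;; falls through silently.
  (defun dispatch (event-kind payload)
    (ecase event-kind
      ((:char :mouse) (handle-input payload))  ; (a) handled
      ((:idle :noise) nil)))                   ; (b) chosen to ignore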

>- The program cannot be more correct than the specifications used to
>define what the program was supposed to do. If the program was written
>to behave the wrong way, it doesn't matter if the internals are proven
>to be "verified correct," the results of running the program will be
>incorrect.

Actually, this shows a lack of knowledge of what program proofs are supposed
to provide. Yes, if the spec is incomplete or inconsistent, a proof for
this spec will also have problems. But guess what! By trying to do the
proof based on the spec, you've shown that the spec IS incomplete or
inconsistent. More importantly, you've usually shown HOW it's incomplete or
inconsistent so that the specification can be modified to handle these cases.

>If you don't have time/opportunity to fully define the parameters of the
>system that the program is supposed to somehow analyze or react to, then
>you are quite limited as to how "verifyably correct" you can be.

Well, if you don't have time to produce a correct system, then all bets are
off anyway. Lack of time makes us all stupid. But I'd say that formal
verification will help deliver "more correct" code for a given time period
than a seat-of-the-pants method.  And, if I had a white-box test opportunity
and proof tools, I could find test cases that break the code (or else a
proof that it does work as stated). In time, if you don't let my white-box
test find the bugs, some user will.  No seat-of-the-pants method approaches
that level of certainty.
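
A toy version of that white-box check, with an invented spec predicate --
over a finite domain, exhaustion either finds a counterexample or *is*
the proof:

  ;; Toy sketch: check every input in a small domain against a
  ;; specification predicate (here: "the result is a non-negative
  ;; integer").  Returns NIL at the first counterexample.
  (defun spec-holds-p (f)
    (loop for x from -1000 to 1000
          always (let ((y (funcall f x)))
                   (and (integerp y) (not (minusp y))))))

  ;; (spec-holds-p #'abs)      => T over this domain
  ;; (spec-holds-p #'identity) => NIL (negative inputs break the spec)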

>- If my boss sneezes on a hankey, and says: "Here are the
>specifications: Go write a program!" the value of verifying the
>correctness of the program is rather limited.

Of course, the simple answer is that you should get a better boss :-). OTOH,
proof techniques will show you it's the boss' hankie that's in error and not
your program (not that a boss like that would be happy having this pointed
out).

>- If my boss tells me to go write a program that will run on an
>unreliable OS platform, it doesn't much matter how little or how much
>verification work I do on my program, it won't likely prevent the system
>from crashing.

Actually, it may. Code that is proven would probably not stress the system
as much as code that was throwing bogus pointers, leaking memory, and, in
general, making a nuisance of itself.

>It is quite appropriate to do some verification to reduce the number of
>possible errors that I may be responsible for introducing into the
>system; there will be some point of diminishing returns on such efforts
>where the cost of the efforts exceed the expected returns.

I don't think that anyone denies this. It's just that most systems today
are done with NO verification and that is touted as a "good thing".

Erik Naggum
Mar 30, 1998

* Brent A Ellingson

| I didn't write that assumptions should go unchecked, or that all
| errors should be ignored, or even that ignoring errors is in general the
| best policy. What I remember writing is that this report makes it pretty
| clear that ignoring *this* error in *this* situation would have been a
| *better* policy than allowing non-critical code to crash both the main and
| backup guidance systems of the rocket, ultimately causing the rocket to
| be torn to shreds by the atmosphere.

I am unable to see how you are _not_ saying that the cause of the
failure was that there should have been an exception handler (that did
nothing) for this situation, but wasn't. this flies in the face of
the gist of the report, which was, and I repeat myself: that the bug was
to let code run that should not have run to begin with.

ObCL: IGNORE-ERRORS is an exception handler.
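
to spell that out in standard Common Lisp (nothing invented here):

  ;; IGNORE-ERRORS is shorthand for a HANDLER-CASE that traps ERROR:
  (ignore-errors (/ 1 0))
  ;; behaves like
  (handler-case (/ 1 0)
    (error (condition) (values nil condition)))
  ;; both return NIL plus the DIVISION-BY-ZERO condition to the caller
  ;; instead of letting the error propagate -- a handler, not the
  ;; absence of one.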

Ray Dillinger
Mar 30, 1998

Frank A. Adrian wrote:
> - I've always thought that a good way to keep
> programs fast was to place artificial constraints on memory. It really
> makes one focus on memory bandwidth (often THE bottleneck in today's
> systems), structure sizes, etc. Has anyone intentionally tried this
> approach? Also, lest others in these newsgroups think I've lost my mind,
> let me state that I am quite aware that this approach also leads to a fair
> amount of programmer pain as well as a marked decrease in programmer
> productivity. It just seems like a tradeoff one could make if one wanted a
> "fast" system rather than a system fast.


What you are describing is "classicist" style programming, and yes
there are some of us who do it for fun and profit. One of my creations
was ttte, the "teeny tiny text editor". It was a full-screen editor.
It ran in 8k. I haven't used it in two years.

Actually the style is more "neoclassical" than "classical" -- some
things that were done to save space way back when are manifestly
bad ideas best forgotten, and the "classicists" leave them behind.

Bear

Rob Warnock
Mar 31, 1998

Ray Dillinger <be...@sonic.net> wrote:
+---------------
| Frank A. Adrian wrote:
| > - I've always thought that a good way to keep
| > programs fast was to place artificial constraints on memory...
|
| What you are describing is "classicist" style programming, and yes
| there are some of us who do it for fun and profit...
+---------------

I've always asserted[*] that the reason placing "artificial" constraints on
*any* aspect of one's program seems to improve the quality is that it forces
one to look at the code more than once! ...which so often people don't. Their
code is "write-only".

It is in the process of "tuning" (and I really don't care *which* parameter
you're tuning) that one re-reads the whole program, and it is during this
re-reading that one discovers the *really* significant beneficial changes --
usually major algorithm changes.

This is the same reason that programs that are initially "buggy" often end
up having (after they're debugged, that is) better performance or memory
utilization than programs that worked the first time. [Note: Initially
buggy programs *don't* tend to have fewer post-delivery errors -- quite
the reverse!]


-Rob

[*] Back when I was chairing the DECUS DEC-10 SIG on Implementation Languages
(which mostly meant BLISS, in those days), I formulated the only "law" upon
which I've ever had the temerity to place my name:

Warnock's Law For Why BLISS Programs Are So Big: It's because they
mostly work the first time, and so they're never debugged. And since
it's normally during the process of debugging that the programs are
*read* (or reviewed) and the major algorithmic changes made that save
substantial memory, these changes are not made, either. So you're
left with a properly working but bloated program.

[Historical note: At the time this was first put forth, the primary language
for writing both the TOPS-10 operating system and all of the system utilities
was *assembler* (MACRO-10).]
