
"Lisp" is poison for NSF research $$$


Henry Baker

Jun 2, 1995, 3:00:00 AM
From the anonymous reviews of a recent NSF proposal in which Lisp was
mentioned, but only as an incidental tool:

"The LISP environment is really getting out of date as a viable system
environment. Let us not pursue this line of research any more."

and

"The investment may be a wasteful thing for the taxpayers."

----------------

My thought:

"C & C++ are God's way of telling Lispers that they're too productive".

--
www/ftp directory:
ftp://ftp.netcom.com/pub/hb/hbaker/home.html

David Neves

Jun 2, 1995, 3:00:00 AM
In article <hbaker-0206...@192.0.2.1>, hba...@netcom.com (Henry
Baker) wrote:

: From the anonymous reviews of a recent NSF proposal in which Lisp was
: mentioned, but only as an incidental tool:
:
: "The LISP environment is really getting out of date as a viable system
: environment. Let us not pursue this line of research any more."

Amazing. Anything should be allowed as an incidental tool. A researcher
has to pick the best language for his or her group. Groups put a lot of
effort in developing a good tool set for the language they work with. For
an external reviewer to base his/her decision on an incidental tool is
stepping out of bounds. Faulting a dynamic language is being particularly
insensitive to the prototyping needs of research. The reviewer is probably
someone who still views Lisp as the Lisp 1.5 that some programming
language texts cover.

Bjørn Remseth

Jun 2, 1995, 3:00:00 AM

> "C & C++ are God's way of telling Lispers that they're too productive".

(< ((Y (lambda (c) (++ c))) C)
lisp) ;-)

--

(Rmz)

Bj\o rn Remseth !Institutt for Informatikk !Net: r...@ifi.uio.no
Phone:+47 22855802!Universitetet i Oslo, Norway !ICBM: N595625E104337

Michael McIlrath

Jun 3, 1995, 3:00:00 AM
to hba...@netcom.com
hba...@netcom.com (Henry Baker) wrote:
>From the anonymous reviews of a recent NSF proposal in which Lisp was
>mentioned, but only as an incidental tool:
>
>"The LISP environment is really getting out of date as a viable system
>environment. Let us not pursue this line of research any more."
>
>and
>
>"The investment may be a wasteful thing for the taxpayers."
>

Well, I have a simple solution for that, I just don't tell them what my
"incidental tools" are going to be.

If you are in engineering, you might be able to get some leverage out of the
fact that the CAD Framework Initiative's standard extension language is Scheme.

Michael McIlrath <m...@mit.edu>



Marco Antoniotti

Jun 3, 1995, 3:00:00 AM
In article <neves-02069...@neves.ils.nwu.edu> ne...@ils.nwu.edu (David Neves) writes:



> In article <hbaker-0206...@192.0.2.1>, hba...@netcom.com (Henry
> Baker) wrote:
>
> : From the anonymous reviews of a recent NSF proposal in which Lisp was
> : mentioned, but only as an incidental tool:
> :
> : "The LISP environment is really getting out of date as a viable system
> : environment. Let us not pursue this line of research any more."
>
> Amazing. Anything should be allowed as an incidental tool. A researcher
> has to pick the best language for his or her group. Groups put a lot of
> effort in developing a good tool set for the language they work with. For
> an external reviewer to base his/her decision on an incidental tool is
> stepping out of bounds. Faulting a dynamic language is being particularly
> insensitive to the prototyping needs of research. The reviewer is probably
> someone who still views Lisp as the Lisp 1.5 that some programming
> language texts cover.

I would say *almost all* the programming language books. If you fish
around for "programming language" books, very few even *acknowledge*
the presence of Common Lisp and - when they are up to it - use some
subset of Scheme to show the power of recursion and list
processing. The fact that both CL and Scheme have arrays and vectors
does not even breeze near the author(s). Think what happens to
CLOS. :(

Cheers

--
Marco G. Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertholdt Brecht

Ken Anderson

Jun 4, 1995, 3:00:00 AM
In article <3qnek3$m...@Yost.com> yo...@Yost.com (Dave Yost) writes:
>Amazing. Anything should be allowed as an incidental tool. A researcher
>has to pick the best language for his or her group. Groups put a lot of
>effort in developing a good tool set for the language they work with. For
>an external reviewer to base his/her decision on an incidental tool is
>stepping out of bounds. Faulting a dynamic language is being particularly
>insensitive to prototyping needs of research. The reviewer is probably
>someone who still views Lisp as the Lisp 1.5 that some programming
>language texts cover.

> Denial.
>
> I think Lisp implementers should take this as a wake-up call.

Everyone, not just implementors, needs to hear this.

> There are other warnings.
> * Lucid went out of business

Why was that? The hearsay says the C++ side brought the house down.

> * CMUCL was abandoned, and the people are working on Dylan

Perhaps they did not believe that Lisp had the seeds for its own renewal.

> * MCL was abandoned for 2 years before being revived
> * The GARNET project has left lisp behind and has gone to C++.
> It's now 3 times faster, and more people are interested in it.

This "3" is an interesting number, where does it come from? While people
often mention a factor of 10 difference between Lisp and C, the Lisp
version is usually junk (sorry i only have two examples of this). The
analysis i've done on admitedly small programs, suggests reasonable factors
are less than 1.5. Garnet is a sizable chunk of code, so as an example of
a real application, understanding the performance and other differences
between the Lisp and C++ versions would be very valuable to us. I can find
the Garnet Lisp code from the Lisp Faq (ftp a.gp.cs.cmu.edu
/usr/garnet/garnet), but where is the C++ code?

> Surely there are many others.

1. Any expert system shell that is now in C or C++.

2. Anyone who would rather be programming in Lisp than what they are
programming in. Here are two of the projects i'm working on:

2P1: Coding in C++, even though i'm implementing a very simple Scheme-like
parallel language (without Scheme syntax). I prototype in Lisp and hand
code a threaded interpreter that uses C++ objects at runtime.

2P2: Coding in Scheme, although Lisp would be more effective. This is
because Lisp is "tainted", which was Henry's point, and Scheme can be hidden
inside "a C application". Also, Scheme is flexible enough for those
who realize they want it on the inside.

> As far as I can tell, ANSI lisp is being treated as a huge
> plateau, as if there is nothing interesting left to do, or
> as if any further changes would be too hard to negotiate.

> What about speed? size? C/C++ interoperability?

I think users and vendors working together can do a lot here.

> These issues have been untreated emergencies for some years now.

It is easy to say "we did that 10 years ago in Lisp". However, it doesn't
mean much if we can't deliver it to the people who want it now. We need to
collect and focus our capabilities so we can.
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA

John Atwood

Jun 5, 1995, 3:00:00 AM
In article <KANDERSO.9...@bitburg.bbn.com>,

Ken Anderson <kand...@bitburg.bbn.com> wrote:
>In article <3qnek3$m...@Yost.com> yo...@Yost.com (Dave Yost) writes:
> * The GARNET project has left lisp behind and has gone to C++.
> It's now 3 times faster, and more people are interested in it.
>
>This "3" is an interesting number, where does it come from? While people
>often mention a factor of 10 difference between Lisp and C, the Lisp
>version is usually junk (sorry i only have two examples of this). The
>analysis i've done on admitedly small programs, suggests reasonable factors
>are less than 1.5. Garnet is a sizable chunk of code, so as an example of
>a real application, understanding the performance and other differences
>between the Lisp and C++ versions would be very valuable to us. I can find
>the Garnet Lisp code from the Lisp Faq (ftp a.gp.cs.cmu.edu
>/usr/garnet/garnet), but where is the C++ code?

The number 3 comes from the Garnet/Amulet development team.
In the Garnet FAQ
(http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ),
they answer the question "Why switch to C++?" with a list of political &
technical reasons. One technical reason:

* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
but it was still pretty slow on "conventional" machines. The initial
version of the C++ version, with similar functionality, appears to be
about THREE TIMES FASTER than the current Lisp version without any
tuning at all.


The C++ code is now available (Amulet alpha 0.2) at:
http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html


John
--
_________________________________________________________________
Office phone: 503-737-5583 (Batcheller 349);home: 503-757-8772
Office mail: 303 Dearborn Hall, OSU, Corvallis, OR 97331
_________________________________________________________________

Bulent Murtezaoglu

Jun 5, 1995, 3:00:00 AM
>>>>> "Dave" == Dave Dyer <dd...@netcom.com> writes:

Dave> I have to agree with Dave Yost; In many respects, modern
Dave> C/C++/Visual Basic development environments rival or exceed
Dave> the best lisp has to offer. The underlying language is
Dave> still crap, but the gloss on top of it demos really well;
Dave> and truthfully, goes a long way toward improving
Dave> productivity.

"Demoing well" does not translate into productivity. People usually don't
complain about Lisp development evironments (which are very good actually),
but about efficiency/image size/common GUI/resource needs etc. Given the
memory/CPU power available on low end machines, and the increasing
sophistication of off-the-shelf PC operating systems (OS/2 and windows
promises), it might only be a matter of time before someone comes out with
a Turbo-Lisp at a reasonable price ($200 or so). Note, though, that
visual basic is not really in the same league. The way I have seen it used
is for cute and flashy windows programs that don't do anything requiring
remotely sophisticated or unusual algorithms. A turbo-Lisp could do all
visual Basic could do and more, but most people would not bother to learn
Lisp when when they can do all they want in Basic. If this is a problem at
all, it has more to do with the local (US) pop computer/PC culture than Lisp.

Dave> Despite many millions that went into Symbolics, LMI, TI and
Dave> Xerox (both directly and to their customers) there is not
Dave> *ONE* really well known "lisp" success story to point to;
Dave> and on the flip side, everybody knows how much was invested
Dave> in those companies, and where they are now. [...]

Hmmm. Off the top of my head I'd say Emacs, AutoCAD, and symbolic math
systems with Lispy engines inside. Sure, MS Word and Excel aren't written
in Lisp, and they sell well and serve their intended purpose, but when
you try to do anything "unusual" with them you realize how crippled they
really are underneath that polished look. On the language vendor side, both
Franz and Harlequin seem to be doing well, and according to the rumors on the
net it wasn't the Lisp business that brought about Lucid's demise.

cheers,

BM

Marty Hall

Jun 5, 1995, 3:00:00 AM
In article <KANDERSO.9...@bitburg.bbn.com>
kand...@bitburg.bbn.com (Ken Anderson) writes:
[...]

> * Lucid went out of business
>
>Why was that? The hearsay says the C++ side brought the house down.

I think this is more than hearsay. The Lisp side of the house was
still making money, but they went far into debt to get into the
Energize and C++ business. When these businesses didn't make a profit,
the investors pulled the plug. One of their senior guys referred to it
as "Albatrossergize".

Be that as it may, I agree very much with Dave Yost that we should
take very seriously the real and perceived problems with Lisp, both on
the technical and political ends. The issues Dave mentioned should not
just be brushed aside in a defend-Lisp-at-all-costs fervor, as
sometimes happens on this group.

On a more upbeat note, here at the Johns Hopkins University
Applied Physics Lab, we have several Navy, ARPA, and Army projects
that make significant use of Lisp. The role of Lisp has gone over
pretty well with internal management and sponsors, even for the newly
starting programs. But JHU/APL is not mainstream industry. More
stories are needed like AT&T using Lisp (and AllegroStore) for their
Global BroadBand 2000 switching system.
- Marty
(proclaim '(inline skates))

Richard M. Alderson III

Jun 6, 1995, 3:00:00 AM
In article <3r20a2$m...@Yost.com> yo...@Yost.com (Dave Yost) writes:

>Perhaps someone could do a survey people who still remember what they went
>through to transition from C to Lisp and collect a list of things they found
>difficult or hard to get used to.

Or whatever language(s) you knew prior to learning Lisp; some of us still don't
know C well enough to generate it, though we may be able to read it.

By the time I learned Lisp, I had already worked out all the difficulties with
recursion and pointers by (a) having worked with PL/1 area variables and the
like, and (b) having written a recursive-descent compiler for Pascal. I'd
already been writing 360/370 assembler for years, so had little difficulty with
the SET vs. SETQ concepts; PDP-10 assembler clarified CAR/CDR *implementation*,
but the concepts were already there.

The biggest problem was using Weissman's book, and the McCarthy manual, with a
PDP-10 version of Standard Lisp (before I graduated to MACLISP): quoting
top-level forms wasn't necessary in the first Lisp I used (LISP 360).

Actually, the biggest problem was using Weissman's book at all. All that time
spent on dotted-pair notation, instead of on application of Lisp itself. It
only stopped being a toy for me when the first edition of Winston & Horn came
into the Math library.

So I don't think the problems are with C vs. Lisp, but with the usual level of
preparation given to programmers with respect to data structures in general.

Just my not so damned humble opinion, of course.
--
Rich Alderson You know the sort of thing that you can find in any dictionary
of a strange language, and which so excites the amateur philo-
logists, itching to derive one tongue from another that they
know better: a word that is nearly the same in form and meaning
as the corresponding word in English, or Latin, or Hebrew, or
what not.
--J. R. R. Tolkien,
alde...@netcom.com _The Notion Club Papers_

Kenneth D. Forbus

Jun 6, 1995, 3:00:00 AM
My experience was different. I recently received funding for a large
education project from NSF. Here is how I described my computing
plans:


"We plan to use Pentium PCā€™s for all development purposes,
for several reasons. First, although they are powerful enough
to serve as a development platform, we expect machines such as
these will be widespread in in high schools within five years.
Second, our collaborators at Oxford have easy access to 486ā€™s
and Pentiums, and both universities can, if early experiments are
successful, justify outfitting additional laboratory machines with
the additional RAM and peripherals required to run our software.
Third, there is a greater wealth of software libraries and toolkits
on this platform that can be used to build high-quality interfaces,
visualization tools, and the other types of necessary software far
more easily than we could otherwise And finally, we have found
software tools that allows us to produce royalty-free runtime
systems, which simplifies shipping experimental software to our
collaborators at Oxford and allows us to test our software more
widely than would otherwise be legally possible.

We will use Franz Common Lisp for Windows, with the Common Lisp
Interface Manager (CLIM), for much of our programming work.
The continued use of high-level tools when possible will maximize
our productivity, and we can port our workstation-based software
most easily that way. We will use C++ for numerically-intensive
portions of the virtual laboratories to maximize performance. For
interface and visualization tool development we will use a
combination of Visual Basic, C++, and CLIM, so that we can gain
leverage from the use of third-party libraries, including C++
libraries (such as the Quinn-Curtis graphing & plotting library)
and the plethora of Visual Basic-based interface building kits.
We also will
investigate integration with commercial symbolic mathematics
packages, particularly Matlab because it is currently used at
Northwestern in teaching control theory, to supply analysis tools
for one version of the Feedback Articulate Virtual Laboratory."

Reviewers were sympathetic to the use of Common Lisp for prototyping
and rapid development. They also thought that relying on third-party
software was a good idea (why write surface plotters, for instance,
when you can buy them?). I'm developing as much as possible in
ACLPC, using its DLL & DDE facilities to access 3rd party stuff.
The ability to make stand-alone executables without royalties is,
for my purposes, a major win.

So, Lisp isn't necessarily poison at NSF.
There is always a lot of variance
in reviewers...

Ken


hba...@netcom.com (Henry Baker) wrote:
>
> From the anonymous reviews of a recent NSF proposal in which Lisp was
> mentioned, but only as an incidental tool:
>
> "The LISP environment is really getting out of date as a viable system
> environment. Let us not pursue this line of research any more."
>

Clint Hyde

Jun 6, 1995, 3:00:00 AM
I had a funny epiphany earlier today. I was at a meeting in which
someone was presenting a C++ class library he had developed, which
included such things as a List, Hash-table, String, and some other basic
stuff.

also presented was a "vocabulary", which is a complex object cluster
featuring a hash-table of words, a special 'constructor' function which
allows you to create and load a vocabulary from a file. if you call this
constructor more than once, it doesn't reload the file into a new
vocabulary, but gives you the existing one (because vocabs can get VERY
large), and increments the "reference counter" to this object.

"REFERENCE COUNTER" ??? I asked. and yes, it does what you think it
does.

it then dawned on me that I have been wrong all along about C++.

C++ DOES have a garbage collector, and here I have been saying for years
it didn't. it's ME! I'm the garbage collector if I write C++. and all
I have to do is reference counting.

hahahahahaha !

-- clint

isn't NSF one of those things the Republicans want to cut off at the knees?


Ken Anderson

Jun 7, 1995, 3:00:00 AM
In article <3qvuft$f...@engr.orst.edu> atw...@ada.CS.ORST.EDU (John Atwood) writes:

> The number 3 comes from the Garnet/Amulet development team.
> In the Garnet FAQ
> (http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ),
> they answer the question "Why switch to C++?" with a list of political &
> technical reasons. One technical reason:
>
> * Speed: We spend 5 years and lots of effort optimizing our Lisp code,
> but it was still pretty slow on "conventional" machines. The initial

I suspect they spent 5 years developing and some of that optimizing. Their
change log only mentions optimization 20 times.

The mention of "conventional" machines suggests that Garnet may have
started on Lisp machines. When porting to a conventional machine you need
to use declarations wisely, and make other changes to your code. A quick
grep through the Garnet sources suggests that there is plenty of room for
improvement. For example:

1. Of the 1201 declarations there is only 1 fixnum declaration, even though
i suspect a graphics application does a lot of fixnum arithmetic (there are
at least 2500 uses of "(+ ", for example).

2. Things like (floor (/ (- fixed-width comp-width) 2)) are performance
jokes.

3. There are integer declarations, but they are likely to be useless.

Profiling is required to show what optimizations are important, but this
suggests there is plenty of room for improvement.

> version of the C++ version, with similar functionality, appears to be
> about THREE TIMES FASTER than the current Lisp version without any
> tuning at all.

It would be interesting to see how they got similar functionality in C++.
For example, it looks like the kr language (i.e., kr-send, g-value) makes
heavy use of binding special variables, which is not a fast operation.
They probably did that some other way in C++. Perhaps the Lisp version
should take advantage of this too.

> The C++ code is now available (Amulet alpha 0.2) at:
> http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html

Thanks, i'll take a look.

Steve Haflich

Jun 7, 1995, 3:00:00 AM
In article <aldersonD...@netcom.com> alde...@netcom.com (Richard M. Alderson III) writes:

In article <3r20a2$m...@Yost.com> yo...@Yost.com (Dave Yost) writes:

>Perhaps someone could do a survey people who still remember what they went
>through to transition from C to Lisp and collect a list of things they found
>difficult or hard to get used to.

I would love to help, but C did not yet exist when many of us made the
transition to Lisp. I do believe knowledge of Lisp did not
hinder my acquisition of C.

> Or whatever language(s) you knew prior to learning Lisp; some of us still don't
> know C well enough to generate it, though we may be able to read it.

C aside, I have still not been able to learn C++ well enough to
generate it. The fundamental concepts underlying C++ cleverly avoid
any number of fundamental non-problems, but the bizarre lexography
mostly keeps that cleverness hidden.

Fernando Mato Mira

Jun 8, 1995, 3:00:00 AM
In article <KANDERSO.9...@bitburg.bbn.com>, kand...@bitburg.bbn.com (Ken Anderson) writes:

> >The C++ code is now available (Amulet alpha 0.2) at:
> >http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html
>

> Just for note, I had a look at the docs and ran some code and have to
> say this is a nice toolkit, powerful and easy to understand.

Didn't have that luck. I tried to compile with SGI's CC, but it doesn't work (GCC is not an option):

"amulet/src/gem/gemX_styles.cc", line 174: error(3390):
more than one instance of constructor "Am_Style::Am_Style" matches
the argument list:
function "Am_Style::Am_Style(float, float, float, short,
Am_Line_Cap_Style_Flag, Am_Join_Style_Flag,
Am_Line_Solid_Flag, const char *, int,
Am_Fill_Solid_Flag, Am_Fill_Poly_Flag, Am_Image_Array)"
function "Am_Style::Am_Style(const char *, short,
Am_Line_Cap_Style_Flag, Am_Join_Style_Flag,
Am_Line_Solid_Flag, const char *, int,
Am_Fill_Solid_Flag, Am_Fill_Poly_Flag, Am_Image_Array)"
Am_Style style (0, 0, 0, thickness);

BTW, is the Motif look and feel simulated, or does it use Xm widgets?

PS:
Some trivial fixes for those with the time to look into this:

Makefile.vars.CC.SGI:
FLAGS = -I$(AMULET_DIR)/include -DNEED_BOOL -DNEED_BSTRING -DNEED_UNISTD -DDEBUG

gemX.h:
#ifdef NEED_BSTRING
#include <bstring.h>
#endif

gemX_windows.cc:
#ifdef NEED_UNISTD
#include <unistd.h>
#endif

--
F.D. Mato Mira http://ligwww.epfl.ch/matomira.html
Computer Graphics Lab mato...@epfl.ch
EPFL FAX: +41 (21) 693-5328


Martin Cracauer

Jun 8, 1995, 3:00:00 AM
atw...@ada.CS.ORST.EDU (John Atwood) writes:

>In the Garnet FAQ

>* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
>but it was still pretty slow on "conventional" machines. The initial

>version of the C++ version, with similar functionality, appears to be
>about THREE TIMES FASTER than the current Lisp version without any
>tuning at all.

I don't think that is a meaningful number for comparing the speed of
Common Lisp and C++ in general. Amulet is the second system and probably
has a cleaner and tighter implementation.

Additionally, in some places C++ *requires* faster coding techniques
where a Lisp solution may be more elegant. In Amulet, formulas are
mapped to ordinary functions in constant space. This is ugly, and the
Lisp version was more elegant - but slower - in this regard.

>The C++ code is now available (Amulet alpha 0.2) at:
>http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html

Just as a note, I had a look at the docs and ran some code, and I have to
say this is a nice toolkit, powerful and easy to understand.

Congratulations.

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@wavehh.hanse.de>. No NeXTMail, please.
Norderstedt/Hamburg, Germany. Fax +49 40 522 85 36. This is a
private address. At (netless) work programming in data analysis.

Ken Anderson

Jun 8, 1995, 3:00:00 AM
In article <1995Jun8.0...@wavehh.hanse.de> crac...@wavehh.hanse.de (Martin Cracauer) writes:

> atw...@ada.CS.ORST.EDU (John Atwood) writes:

>In the Garnet FAQ
>* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
>but it was still pretty slow on "conventional" machines. The initial
>version of the C++ version, with similar functionality, appears to be
>about THREE TIMES FASTER than the current Lisp version without any
>tuning at all.

> I don't think that is a meaningful number for comparing the speed of
> Common Lisp and C++ in general. Amulet is the second system and probably
> has a cleaner and tighter implementation.

It certainly isn't meaningful in this case, since the Lisp version has no
useful declarations that i could see. To get meaningful numbers you need
to do a careful analysis. If you port something from Lisp to C++, you
should also port it back to Lisp. The other problem is that when people
see a slow Lisp application it is easy to blame the language.

> Additionally, in some places C++ *requires* faster coding techniques
> where a Lisp solution may be more elegant. In Amulet, formulas are
> mapped to ordinary functions in constant space. This is ugly, and the
> Lisp version was more elegant - but slower - in this regard.

Possibly. It is easy for people to believe that every high level feature
in Lisp has a negative performance impact. That need not be the case.

>The C++ code is now available (Amulet alpha 0.2) at:
>http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html

> Just as a note, I had a look at the docs and ran some code, and I have to
> say this is a nice toolkit, powerful and easy to understand.
> Congratulations.

--

Rich Parker

Jun 8, 1995, 3:00:00 AM
mu...@cs.rochester.edu (Bulent Murtezaoglu) wrote:
>Hmmm. Off the top of my head I'd say Emacs, AutoCAD, and symbolic math
>systems with Lispy engines inside. Sure, MS Word and Excel aren't written
>in Lisp, and they sell well and and serve their intended purpose but when
>you try to do anything "unusual" with them you realize how crippled they
>really are underneath that polished look. On the language vendor side, both
>Franz and Harlequin seem to be doing well and according to the rumors on the
>net it wasn't the Lisp business that brought about Lucid's demise.

Also, Interleaf Publisher has a Lispy engine, to my knowledge. For a while it
was a popular multi-platform page-layout program, but FrameMaker has
largely taken over Interleaf's market, and Frame is written in MPW C & C++.

A smaller-footprint compiled Lisp (without EVAL) is really needed in order to
compete with modern development environments. The product should also feature
an interface builder for every platform that it supports (at _least_ the Mac)
<g>.

I would also look forward to a _real_ application framework, such as those
that modern C++ environments provide.

FWIW,

-rich-

Olivier Clarisse

Jun 8, 1995, 3:00:00 AM

The case of AMULET versus GARNET is becoming quite entertaining.
Would anyone knowledgeable about these systems care to comment?

In article <KANDERSO.9...@bitburg.bbn.com>, kand...@bitburg.bbn.com (Ken Anderson) writes:

|> In article <3qvuft$f...@engr.orst.edu> atw...@ada.CS.ORST.EDU (John Atwood) writes:
|>
|> I suspect they spent 5 years developing and some of that optimizing. Their
|> change log only mentions optimization 20 times.

1. What percentage of the total GARNET project resources were spent
optimizing (for speed) versus discovering and implementing
new concepts of constraint-based VP, for example?

2. What tools were used for code profiling and metering on CMU-CL
for GARNET?

3. Were courses taught on CL optimization and programming CL
for speed at CMU during the GARNET project? etc.

|>[...] A quick


|> grep through the Garnet sources suggets that there is plenty of room for
|> improvement. For example:
|>
|> 1. Of the 1201 declarations there is only 1 fixnum declaration even though
|> i suspect a graphics application does a lot of fixnum arithmetic (there are
|> at last 2500 uses of "(+ ", for example.
|>
|> 2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
|> jokes.
|>
|> 3. There are integer declarations, but they are likely to be useless.
|>
|> Profiling is required to show what optimizations are important, but this
|> suggests there is plenty of room for improvement.
|>

I agree, and unless developers from the original GARNET team can comment
to counter or confirm the above, these observations indicate
that the GARNET speed optimization effort was a *joke*.
That alone would justify a rewrite of GARNET in whatever language
the next team of developers is more familiar with.

I suppose the WEB pages on GARNET and AMULET will soon be
updated to correct the misinformation they are causing
(right?).


|>
|> version of the C++ version, with similar functionality, appears to be
|> about THREE TIMES FASTER than the current Lisp version without any
|> tuning at all.
|>

It changes the conclusions I would draw from this last statement:
"The GARNET project used Lisp to discover new principles of
GUI and VP that would have been very hard to conceptualize using other
environments [at the time], now a new project called AMULET is using
these results to reimplement a product in C++ based on these findings."
This is perfectly fine and is a great way to get project funding at this time.

Do we see a trend elsewhere in this industry where good new ideas have
often emerged from Lisp projects to be later productized in C and C++?

Perhaps if Lisp and CLOS were taught with clear emphasis on performance
optimization tools and techniques, productizing in another language
would not be necessary. But then again, rewriting is always so
much easier (and rewarding to the rewriters) than inventing...

--
----------------
Olivier Clarisse "Languages are not unlike living organisms
Member of Technical Staff can they adapt and improve to survive?"
AT&T Bell Laboratories

E. Handelman

Jun 9, 1995, 3:00:00 AM

yo...@Yost.com (Dave Yost) writes:


>Perhaps someone could do a survey people who still remember what they went
>through to transition from C to Lisp and collect a list of things they found
>difficult or hard to get used to.

How's about the other way around? I've written huge incomprehensible
self-mutating lisp programs that do nothing and yet I can't figure out the
first thing about C. What boggles the mind is that C programs, as I
understand it, are actually intended to do things. Am I alone in
being unable to grasp this idea?

Clint Hyde

Jun 9, 1995, 3:00:00 AM
In article <eliot-09069...@slip53.dialup.mcgill.ca> el...@sunrise.cc.mcgill.ca (E. Handelman) writes:

--> I've written huge incomprehensible
--> self-mutating lisp programs that do nothing

this is an amazing statement. what are we supposed to infer from it?

I wouldn't call it a good claim to fame, or suggest that it clearly
indicates why you should therefore be good at C.

--> and yet I can't figure out the first thing about C.

-- clint


Stuart Watt

Jun 14, 1995, 3:00:00 AM
In article <3r5ads$6...@news.aero.org>, do...@aero.org (John Doner) wrote:

> Debugging: Most Lisps have pretty serviceable debugging tools. But
> what if your code breaks something at a low level? It's pretty
> easy to tell what piece of C source some assembly code corresponds
> to, but not so for Lisp. And there are other difficult situations:
> I've been scratching my head for a while over one where the
> compiler chokes on code produced by several layers of interacting
> macros. It is bewildering trying to figure out where this code
> originally came from!

IMHO debugging in Lisp hasn't changed at all significantly since the Lisp
Machine days. There are at least two people who have looked at proper
source level debugging recently, and as I am one of them, I would really
like implementors to wake up on this. I get really fed up with the
retrogressive environments which are still using framed windows and,
basically, still trying to be Lisp Machines. Don't get me wrong: the Lisp
Machines were great for the early 1980s and are still a lot better than
most Lisp environments today. That in itself shows the stasis in the
field. We could do a lot better if we tried.

But the language debate is mostly cultural, not technical, a point
forcefully made by Richard Gabriel in his "Good News, Bad News, How to Win
Big" paper in 1990. It doesn't really matter which language is *better*,
because that isn't the most important criterion. There are already Lisps
which generate '.o' files, which are almost as fast as optimised C code,
and which have great environments. Does this make any difference?
Apparently not. Lisp will survive because, for many of us, it is a faster
and better environment, and that is enough. Trying to persuade others to
use it is completely ineffectual in my experience. It is better (and
sometimes a lot more fun) just to sit back and watch them taking six
months to build systems we can build in six weeks. I think we might as well give
up being evangelists until we've something new to say, and just get on
with making Lisp better in the meantime.

Regards, Stuart

--
Stuart Watt; Human Cognition Research Laboratory, Open University,
Walton Hall, Milton Keynes, MK7 6AA, UK.
Tel: +44 1908 654513; Fax: +44 1908 653169.
WWW: http://kmi.open.ac.uk/~stuart/stuart.html

Chris Reedy

Jun 14, 1995, 3:00:00 AM
In article <SMH.95Ju...@vapor.Franz.COM>, s...@Franz.COM (Steve
Haflich) wrote:

> In article <aldersonD...@netcom.com> alde...@netcom.com (Richard
M. Alderson III) writes:
>

> In article <3r20a2$m...@Yost.com> yo...@Yost.com (Dave Yost) writes:
>
> >Perhaps someone could do a survey people who still remember what they went
> >through to transition from C to Lisp and collect a list of things
they found
> >difficult or hard to get used to.
>

> I would love to help, but C did not yet exist when many of us made the
> transition from C to Lisp. I do believe knowledge of Lisp did not
> hinder my acquisition of C.
>
> Or whatever language(s) you knew prior to learning Lisp; some of us
still don't
> know C well enough to generate it, though we may be able to read it.
>

As someone who knows a fairly large number of programming languages (Lisp,
C, C++, Pascal, Ada, 3 assembly languages, a little Smalltalk ...) and who
knew most of them before coming to Lisp, here, off the top of my head, are
some of the things I found myself stumbling over.

1. The size of the language.

Yes, just about everything I've ever wanted is there. The problem is
that, even after having read through Steele, I find myself forgetting that
the language has certain capabilities. I frequently find myself having to
look up how to use those capabilities.

If I was trying to teach this as an initial programming language, I would
definitely want to find a small subset that illustrates the key concepts.

2. CLOS and Generic Functions

Interestingly enough, the generic function approach to objects took some
getting used to, primarily due to lack of prior examples (neither C++ nor
Smalltalk does it this way). In the long run I am happier with this
approach than the others, especially since it solves one of the pains of
C++ which is that I can't (in C++) create methods on built-in types.
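
For instance (a minimal sketch; BRIEF is a hypothetical generic function),
CLOS methods can specialize directly on built-in classes:

(defgeneric brief (x))
(defmethod brief ((x integer)) (format nil "int ~D" x))
(defmethod brief ((x string)) (format nil "str ~A" x))

Nothing comparable is possible with C++ member functions on int or char*.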

3. Garbage collection

A person coming from C/C++ has to break themselves of the habit of
worrying about "ownership" concepts, since ownership is the usual approach
to keeping your sanity in a programming language where you have to perform
explicit deallocation. Lisp code that embodies this concept looks rather
ugly.
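
A sketch of what that looks like (ACQUIRE-BUFFER, PROCESS, and DISPOSE are
hypothetical names):

(let ((buf (acquire-buffer)))
  (unwind-protect
      (process buf)
    (dispose buf)))     ; explicit "ownership" bookkeeping

Under garbage collection the UNWIND-PROTECT/DISPOSE scaffolding simply
disappears: drop the last reference and the collector reclaims the buffer.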

P.S. I have anecdotal evidence that this is one of the major pains
associated with trying to convert a Lisp application to C/C++.

4. Multiple ways to do the same thing

For example:
car/cdr versus first/rest
lists versus vectors versus structures versus classes
setf versus setq

And in some cases the (almost) same way to do very different things, for
example:

(setq x y) versus (setq x 'y)

This may also be a partial reiteration of number 1.
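
To spell out that last pair (a two-line sketch):

(setq x y)  ; X gets the *value* of the variable Y
(setq x 'y) ; X gets the *symbol* Y itself

One quote character is the difference between an ordinary assignment and a
piece of symbolic data.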

5. Macros

In my opinion this is one of the most important capabilities in Lisp.
When I am using Lisp I want to be able to define "my own language" and use
it. However, backquote notation takes a little getting used to, and you
have to think about the fact that a macro is a function returning "source
code" for a while before the beauty of the concept sinks in.

6. The lack of separation between compile, load, and execute

With most other languages (Smalltalk excepted), these are separate steps.
The fact that which modules are loaded when I attempt a compilation can
significantly impact the results of that compilation is something I'm
still not comfortable with (even though I use it to my advantage, see the
prior comment), especially since ...

7. Lack of a standard defsystem and dependency analysis tools

When I'm doing C/C++ I have make utilities. For the more advanced users
there are dependency analysis tools which generate the make dependencies
(the most error prone part) for you. The more advanced environments on
PCs and Macs provide this capability as a part of the environment so I
don't even have to worry about it. I still don't know the right way to go
about organizing a large lisp system.

> C aside, I have still not been able to learn C++ well enough to
> generate it. The fundamental concepts underlying C++ cleverly avoid
> any number of fundamental non-problems, but the bizarre lexography
> mostly keeps that cleverness hidden.

I do generate C++. It's an ugly language. After I (finally, really)
understood instance initialization in C++, I decided I definitely prefer
Lisp. The problem with instance initialization in C++ is that it is easy
to do the easy examples and a major pain in the *** to do the complex
examples.

Hope that helps.

Chris

The above opinions are my own and not MITRE's.
Chris Reedy, Open Systems Center, Z667
The MITRE Corporation, 7525 Colshire Drive, McLean, VA 22102-3481
Email: cre...@mitre.org Phone: (703) 883-7183 FAX: (703) 883-6991

Clint Hyde

Jun 14, 1995, 3:00:00 AM
In article <S.N.K.Watt-14...@uu-stuart-mac.open.ac.uk> S.N.K...@open.ac.uk (Stuart Watt) writes:

-->
--> IMHO debugging in Lisp hasn't changed at all significantly since the Lisp
--> Machine days. There are at least two people who have looked at proper
--> source level debugging recently, and as I am one of them, I would really
--> like implementors to wake up on this.

rather than moan about things we can't change, how about telling us
about this--I'd certainly like to hear what you've done that's new and
different.

somewhere around here I have a small chunk of code written for Explorer
(and probably functional on Symbolics also) that did an excellent job of
opening the call-stack when you were in the debugger and letting you get
at all sorts of things quite easily. unfortunately, I haven't used it
for 5 years, and don't know where it is--I don't even remember the name
of it now :(

it greatly improved one's ability to do "source-level-debugging"

-- clint


Brad Myers

Jun 15, 1995, 3:00:00 AM
It has just come to my attention that a discussion of Garnet and
Amulet was going on here. I no longer follow these newsgroups, so I
was not aware of it.

The Garnet group put a significant emphasis on performance and spent
many man-months optimizing the code using all available performance
measuring tools. Unlike many of the usual benchmark comparisons of C
and Lisp, type declarations showed no noticeable performance
improvements for Garnet. The biggest performance gains resulted from,
of course, changing algorithms. The same (final) algorithms are
mostly being used in Amulet. My biggest complaints about performance
in Lisp are that:

1) It is generally impossible to tell which Lisp functions and features
will be slow and which will be fast, and furthermore it differs
enormously among compilers.
2) Many of the standard functions cause consing.
3) The syntax for declarations is really awful and it is very
easy to miss some of the places you need to put them.
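
To illustrate (3) with a sketch (the function is hypothetical): each binding
form wants its own declaration, and a missed one fails silently:

(defun sum-scaled (xs scale)
  (declare (type (simple-array fixnum (*)) xs)
           (fixnum scale))
  (let ((sum 0))
    (declare (fixnum sum))   ; easy to omit; SUM then defaults to type T
    (dotimes (i (length xs) sum)
      (incf sum (* scale (aref xs i))))))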

In C++ we find that what seems like it will be efficient usually is,
and vice versa, and the "obvious" way to code an algorithm usually
results in quite efficient code, which is certainly not true in Lisp.

While our coding productivity is probably down somewhat in C++, I feel that
since we don't have to go back and spend time optimizing the code as
much, we might end up with about the same overall time to deliver a
system with a useful level of performance.

The list of reasons we switched from Lisp to C++ is number [12] in:
http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ

By the way, to answer one question, we have been able to retain many of
Garnet's dynamic features in C++, including dynamic slot typing and
generic Get and Set functions, using some of the advanced overloading
capabilities of C++. Amulet is now in alpha release because the
documentation is not quite done. We expect a real release by the end
of the month. See http://www.cs.cmu.edu/~amulet for more information.

Brad A. Myers
Computer Science Department
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213-3891
(412) 268-5150
FAX: (412) 268-5576
b...@cs.cmu.edu
http://www.cs.cmu.edu/~bam


Kelly Murray

Jun 15, 1995, 3:00:00 AM
In article <3rn4kr$1...@info-server.bbn.com>, Clint Hyde <ch...@bbn.com> writes:
|> In article <S.N.K.Watt-14...@uu-stuart-mac.open.ac.uk> S.N.K...@open.ac.uk (Stuart Watt) writes:
|>
|> -->
|> --> IMHO debugging in Lisp hasn't changed at all significantly since the Lisp
|> --> Machine days. There are at least two people who have looked at proper
|> --> source level debugging recently, and as I am one of them, I would really
|> --> like implementors to wake up on this.
|>
|> rather than moan about things we can't change, how about telling us
|> about this--I'd certainly like to hear what you've done that's new and
|> different.

I developed a source-level stepper that displayed Lisp source in a window,
and let you trace/step through it, highlighting the pending forms, and
showing return values. All forms and values were mouse-sensitive: you
could select a function call form and click to turn on tracing,
or have it conditionally stop when that form was called or returned from.
It was quite slick and powerful. It was limited to interpreted code,
but you could VTRACE a compiled function if it could find the source code
for it, calling the compiled version when tracing was off (conditionally too).

Macros are a problem for tracing Lisp source, because the source form
isn't what gets executed. You have the same issue in C using #define's,
but unlike C, I COULD trace any subforms of a CL macro that actually were
executed, so in a (with-open-file (stream ...) (read-line stream ...) ...)
the read-line would get stepped through. You can optionally expand macros too,
or write a corresponding "visual macro" that could vtrace subforms.

(unfortunately, VTRACE is held hostage by the equivalent of the Serbs, so
don't ask if you can use it)

While it was very useful in some situations -- I thought it was perfect
for beginners, as well as for trying to understand someone else's code --
Lisp isn't like C, where you
have to compile and run the whole program before debugging smaller pieces.
Lisp tends to have smaller functions that are easier to understand and fix,
and are usually much more side-effect-free and thus can be debugged
without running in a specific context.
So the need for such fancy source-level debugging is much less.

-Kelly Murray (k...@franz.com) <a href="http://www.franz.com"> Franz Inc. </a>
"Those who can see the invisible can do the impossible" - Carl Mays


John Atwood

Jun 15, 1995, 3:00:00 AM
Regarding the efficiency of Lisp expressions,

Ken Anderson <kand...@bitburg.bbn.com> wrote:
>
>2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
>jokes.
>

I'll bite. What would be the proper way to code this?

Ken Anderson

Jun 19, 1995, 3:00:00 AM
In article <3rq9nb$r...@cantaloupe.srv.cs.cmu.edu> ba...@cs.cmu.edu (Brad Myers) writes:

Sorry if this is a duplicate, it looks like my initial reply didn't get out.

> It has just come to my attention that a discussion of Garnet and
> Amulet was going on here. I no longer follow these newsgroups, so I
> was not aware of it.

> The Garnet group put a significant emphasis on performance and spent
> many man-months optimizing the code using all available performance
> measuring tools. Unlike many of the usual benchmark comparisons of C
> and Lisp, type declarations showed no noticeable performance

Unfortunately a brief (about 2 hours) look at Garnet suggests that type
declarations were either

1. Missing. For example, there are no fixnum declarations in any
arithmetic, and no array declarations for AREFs (although there were some
SVREFs).

2. or mostly useless. For example, a high level function declares an
argument to be a string and passes the argument to lower level functions
that have no type information.

This suggests that there is plenty of room for further performance
improvement.
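
A sketch of what "mostly useless" means (DRAW-LABEL and LOWER are
hypothetical names):

(defun lower (label)
  (char label 0))                  ; no type info: generic CHAR access

(defun draw-label (label)
  (declare (simple-string label))  ; declared here...
  (lower label))                   ; ...but the type is lost across the call

Unless LOWER is inlined or declares its own argument, the SIMPLE-STRING
declaration buys nothing where the work is actually done.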

> improvements for Garnet. The biggest performance gains resulted from,
> of course, changing algorithms. The same (final) algorithms are
> mostly being used in Amulet. My biggest complaints about performance
> in Lisp are that:


> 1) It is generally impossible to tell which Lisp functions and features
> will be slow and which will be fast, and furthermore it differs
> enormously among compilers.
> 2) Many of the standard functions cause consing.

I presume you mean "cause unnecessary consing". I don't know which
functions you are referring to, but using proper declarations can reduce
number consing. I also avoid functions like read-line and string-trim in
critical places in the code.
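
For example (a sketch; the buffer size is arbitrary), one reusable buffer
with READ-SEQUENCE avoids consing a fresh string per READ-LINE:

(defun count-chars (stream)
  (let ((buf (make-string 4096)))
    (loop for n = (read-sequence buf stream)
          until (zerop n)
          sum n)))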

> 3) The syntax for declarations is really awful and it is very
> easy to miss some of the places you need to put them.

These are all fair comments. However, to compensate for these problems
Lisp compilers provide advice about declarations that would improve
performance. CMUCL is particularly good in this regard. There are also
some good profilers which i find extremely effective at identifying where
optimization effort should go. A day or so of profiling every once in a
while should be all you need. Did you use such tools?

> In C++ we find that what seems like it will be efficient usually is,
> and vice versa, and the "obvious" way to code an algorithm usually
> results in quite efficient code, which is certainly not true in Lisp.
>
> While our coding productivity is probably down somewhat in C++, I feel that
> since we don't have to go back and spend time optimizing the code as
> much, we might end up with about the same overall time to deliver a
> system with a useful level of performance.

People sell profilers for C++ too, so i suspect optimization in C++ isn't
completely obvious.

> The list of reasons we switched from Lisp to C++ is number [12] in:
> http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ

While the reasons are clearly stated, i think the following one on speed is
misleading:

"
* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
but it was still pretty slow on "conventional" machines. The initial

version of the C++ version, with similar functionality, appears to be
about THREE TIMES FASTER than the current Lisp version without any
tuning at all.
"

As you pointed out, much of that 5 years would have been spent making
optimizations you would have made in C++ as well, i.e. algorithm changes.
Since undeclared arithmetic is 10 times slower than declared arithmetic, it
is unfair to say that the C++ version had "similar functionality" unless
you implemented the same arithmetic mode in C++.

Since Amulet and Garnet are sizable chunks of code, it would be quite
valuable if you could make your "about THREE TIMES FASTER" benchmark
available to us.

> By the way, to answer one question, we have been able to retain many of
> Garnet's dynamic features in C++, including dynamic slot typing and
> generic Get and Set functions, using some of the advanced overloading
> capabilities of C++. Amulet is now in alpha release because the
> documentation is not quite done. We expect a real release by the end
> of the month. See http://www.cs.cmu.edu/~amulet for more information.

If so, then perhaps profiling and optimizing Garnet would be a good thing
for Amulet.

k

Ken Anderson

Jun 19, 1995, 3:00:00 AM
In article <3rqc23$i...@engr.orst.edu> atw...@ada.CS.ORST.EDU (John Atwood) writes:

> Regarding the efficiency of Lisp expressions,
> Ken Anderson <kand...@bitburg.bbn.com> wrote:
> >
> >2. Things like (floor (/ (- fixed-width comp-width) 2)) are performance
> >jokes.
> >
>
> I'll bite. What would be the proper way to code this?

This is an example where Lisp provides a correct result, but with a perhaps
unexpected performance hit. The performance depends on the type of the
input arguments, fixed-width and comp-width. Let's assume they are fixnums
and that their difference is a fixnum. Unfortunately, the (/ ... 2)
can produce a ratio which is then converted into a fixnum by floor. This
is a common mistake because in C, / of two int's produces an int (by
truncation?). The right way to do this in Lisp is to use the two argument
version of floor. The following table shows that there can be a factor of
7 difference in performance for the Lisp on my desk:

(defun f1 (a b) (floor (/ (- a b) 2)))
(defun f2 (a b) (floor (- a b) 2))
(defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))

Form                 time (microsec)   +/-
(f1 20 5) 34.36351 0.41772142
(f1 20 6) 5.7358184 0.20296581
(f2 20 5) 4.845907 0.6062079
(f2 20 6) 4.83785 0.6236771
(f3 20 5) 4.760563 0.20169255
(f3 20 6) 4.747239 0.0

Fernando Mato Mira

Jun 20, 1995, 3:00:00 AM
In article <KANDERSO.95...@bitburg.bbn.com>, kand...@bitburg.bbn.com (Ken Anderson) writes:

> 1. Missing. For example, there are no fixnum declarations in any
> arithmetic or array declarations for aref's (although there were some
> svref's).

And SVREFs are not a good idea, as the element-type of simple vectors is T.
It's better to declare the arrays as one-dimensional simple arrays with a certain element type
and use AREF (I remember Allegro would open-code AREFs but not SVREFs).
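
A sketch of that suggestion (SUM-BYTES is hypothetical; the element type is
arbitrary):

(defun sum-bytes (v)
  (declare (type (simple-array (unsigned-byte 8) (*)) v))
  (let ((sum 0))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))

With the representation and element type declared, a good compiler can
open-code the AREF without giving up the element-type information.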

Martin Brundage

Jun 20, 1995, 3:00:00 AM
In <eliot-09069...@slip53.dialup.mcgill.ca>
el...@sunrise.cc.mcgill.ca (E. Handelman) writes:
>yo...@Yost.com (Dave Yost) writes:
>>Perhaps someone could do a survey people who still remember what they
went
>>through to transition from C to Lisp and collect a list of things they
found
>>difficult or hard to get used to.
>How's about the other way around? I've written huge incomprehensible
>self-mutating lisp programs that do nothing and yet I can't figure out
the

>first thing about C. What boggles the mind is that c programs, as I
>understand it, are actually intended to do things. Am I alone in
>being unable to grasp this idea?

This is a very interesting comment I seem to have heard more than once.
Based on my exposure to both C and Lisp, I think maybe the explanation
for this phenomenon (i.e., skilled Lisp programmers having difficulty
grasping C, at least as an applications language) is the level of
insulation from the hardware provided by Lisp, due to its high level of
abstraction. C, in contrast, has a very low level of abstraction, being
little more than a portable assembler. C would be most comprehensible to
programmers programming "down to the metal" and intermixing C and
assembler, because it follows the hardware so closely.

As a hardware engineer familiar with assembler programming, my
experience with "learning C" was that I really didn't have to learn it:
I just started using it, with an occasional reference to a manual for
things like syntax. Likewise, I found "learning Lisp" to be equally
intuitive and obvious, probably because of my previous exposure to
high-level languages and C. I strongly suspect that doing the process
backwards, i.e., going in reverse of the levels of abstraction, would
not only be extremely difficult, but painful as well.

For programmers to say "Lisp is hard" and "C is easy" reminds me of the
occasional programmer (usually an amateur) who swears that assembler is
the best and easiest way to go, regardless of the size of the program!
Yes, assembler's easier from the standpoint of the simplicity of the
programming language, but that's as far as it goes. I think this is
equivalent to the discussion of the difficulty of C vs. Lisp, raised one
level of abstraction. It's fascinating to speculate that the popularity
(if not the very existence) of C++ may be based on the fallacy of C
being "easier" than Lisp, in view of the possible basis for this belief.

--
Marty
mar...@ix.netcom.com
mar...@datum.com
Datum Inc, Bancomm Div. "Masters of Time"

Kelly Murray

Jun 20, 1995, 3:00:00 AM
In article <KANDERSO.95...@bitburg.bbn.com>, kand...@bitburg.bbn.com (Ken Anderson) writes:
>> In article <3rqc23$i...@engr.orst.edu> atw...@ada.CS.ORST.EDU (John Atwood) writes:
>>
>> Regarding the effiency of Lisp expressions,
>> Ken Anderson <kand...@bitburg.bbn.com> wrote:
>> >2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
>> >jokes.
>> I'll bite. What would be the proper way to code this?
>> This is an example where Lisp provides a correct result, but with a perhaps
>> unexpected performance hit. The performance depends on the type of the
>> input arguments, fixed-width and comp-width. Lets assume they are fixnums
>> and that their difference is a fixnum. Unfortunately, the (/ ... 2)
>> can produce a ratio which is then converted into a fixnum by floor. This
>> is a common mistake because in C, / of two int's produces an int (by
>> truncation?). The right way to do this in Lisp is to use the two argument
>> version of floor. The following table shows that there can be a factor of
>> 7 difference in performance for the Lisp on my desk:
>> (defun f1 (a b) (floor (/ (- a b) 2)))
>> (defun f2 (a b) (floor (- a b) 2))
>> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))

Good analysis, but the fastest method is to use arithmetic shift instead of floor:

(defun f4 (a b) (ash (the fixnum (- a b)) -1))

At least in AllegroCL on the SPARC, this compiles into a few inline assembly instructions:

0: save #x-68,%o6
4: tsubcc %i1,%i0,%o4 ;; do the subtract
8: bvs,a 48 ;; branch out if they were not fixnums
12: move.l %i0,%o0
lb1:
16: asr.l #x1,%o4 ;; do the shift by 1 to divide
20: move.l #x3,%o3
24: andn %o3,%o4 ;; patch back type
28: move.l %o4,%o0
32: move.l #x1,%g3
36: jmpl 8(%i7),%g0 ;; return from function
40: restore %g0,%o0
44: move.l %i0,%o0

Dave Yost

unread,
Jun 21, 1995, 3:00:00ā€ÆAM6/21/95
to
In article <S.N.K.Watt-14...@uu-stuart-mac.open.ac.uk> S.N.K...@open.ac.uk (Stuart Watt) writes:
>
> IMHO debugging in Lisp hasn't changed at all significantly since the Lisp
> Machine days. There are at least two people who have looked at proper
> source level debugging recently, and as I am one of them, I would really
> like implementors to wake up on this.

Anyone in a position to improve lisp's debugging tools
*must* see this:

http://lieber.www.media.mit.edu/people/lieber/Lieberary/ZStep/ZStep.html

It's a paper (with QuickTime animation) given at CHI '95
on how to greatly improve the programming user experience.
Do check it out.

Dave

Richard M. Alderson III

unread,
Jun 21, 1995, 3:00:00ā€ÆAM6/21/95
to
In article <1995Jun20....@franz.com> k...@math.ufl.edu (Kelly Murray)
writes:

>Good analysis, but the fastest method is to use arithmetic shift instead of
>floor:

>(defun f4 (a b) (ash (the fixnum (- a b)) -1))

AARRRRRRRGGGGGGGGGGHHHHHHHHHHHHHHHHHHHHHHHHHHHHH!

To quote from the PDP-10 hardware reference manual (OK, technically, this
edition is the _DECsystem-10/DECSYSTEM-20 Processor Reference Manual, 1982):

An arithmetic right shift truncates a negative result differently
from IDIV if 1s are shifted out. The result of the shift is more
negative by 1 than the quotient of IDIV. Hence shifting -1 (all
1s) gives -1 as a result.
[page 2-41]
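
In Common Lisp terms, the trap is the difference between TRUNCATE
(round toward zero, like IDIV) and FLOOR (round toward negative
infinity, which the arithmetic shift matches). A quick check at the
listener (standard CL; primary values shown):

(ash -1 -1) => -1 ; shift: one more negative than IDIV-style division
(truncate -1 2) => 0 ; rounds toward zero, like IDIV
(floor -1 2) => -1 ; rounds toward negative infinity, agreeing with ASH

So the shift disagrees with TRUNCATE on negative odd operands, but it
does agree with FLOOR, which is what the earlier examples asked for.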

I checked HAKMEM, since I thought this was in there, but it's not: I seem to
recall that Steele or one of the other MIT hackers did a short report on this
in the late 60's, since people were in the habit of encoding division by 2 in
this very way.

Does anyone remember when, and who, first committed this to writing?

David Neves

unread,
Jun 21, 1995, 3:00:00ā€ÆAM6/21/95
to
In article <1995Jun20....@franz.com>, k...@math.ufl.edu (Kelly
Murray) wrote:

: In article <KANDERSO.95...@bitburg.bbn.com>,
kand...@bitburg.bbn.com (Ken Anderson) writes:
...
: >> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))
:
: : Good analysis, but the fastest method is to use arithmetic shift
: : instead of floor:
: :
: : (defun f4 (a b) (ash (the fixnum (- a b)) -1))

I would hope that writing "f3" would cause the compiler to compile it as
"f4". "f3" is certainly clearer to the reader. If not, the user could
also write a compiler macro to turn an "f3" into an "f4".
-David
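
A sketch of such a compiler macro (hypothetical names, untested;
assumes an ANSI CL with DEFINE-COMPILER-MACRO). Portable code may not
define compiler macros on COMMON-LISP symbols like FLOOR, so the idiom
is wrapped in a function of its own:

(defun half-floor (x)
  ;; Portable definition: the primary value of FLOOR by 2.
  (values (floor x 2)))

(define-compiler-macro half-floor (x)
  ;; ASH by -1 rounds toward negative infinity, exactly FLOOR's
  ;; rounding; the THE form assumes callers pass fixnums.
  `(ash (the fixnum ,x) -1))

With that, (defun f3 (a b) (declare (fixnum a b)) (half-floor (- a b)))
compiles to the shift without writing ASH at each call site.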

Scott Wheeler

unread,
Jun 22, 1995, 3:00:00ā€ÆAM6/22/95
to
In Article <creedy-1406...@128.29.151.94> Chris Reedy writes:
>As someone who knows a fairly large number of programming languages (Lisp,
>C, C++, Pascal, Ada, 3 assembly languages, a little Smalltalk ...) and who
>knew most of them before coming to Lisp, here, off the top of my head, are
>some of the things I found myself stumbling over.

I have a similar language background but I don't know Lisp in any
detail, although in general I try to have a reading knowledge of most
common languages in order to evaluate their strengths. I thought it
might be useful to comment why I haven't found it easy to get into.

>1. The size of the language.
>
>Yes, just about everything I've ever wanted is there. The problem is
>that, even after having read through Steele, I find myself forgetting
>that the language has certain capabilities.

Absolutely. From the outside (i.e. having read about 1/3 of Steele
and browsed the rest, with some other books), it looks like a kitchen
sink language. Probably there are some unifying concepts in there, but
they are much harder to find than in something like C++ or Eiffel.
After working out a way to do something, I'm not confident that it's
the *right* way.

>3. Garbage collection
>
>A person coming from C/C++ has to break themselves of the habit of
>worrying about "ownership" concepts, since ownership is the usual
>approach to keeping your sanity in a programming language where you
>have to perform explicit deallocation.

I find it awkward to bear in mind that something returned from a
function is the thing itself, not a copy. In practice it doesn't
usually make a difference, but it always feels unsafe. I'm left with
the uneasy feeling that if I modify a variable it could have side
effects I haven't anticipated. Not being used to GC languages, I prefer
the C++ version where you can decide whether to take a copy or a
reference to the original. The worst (i.e. hardest for me personally to
work with) version seems to be Eiffel, where copy or reference
semantics are specified on a class by class basis.

>The problem with instance initialization in C++ is that it is easy
>to do the easy examples and a major pain in the *** to do the complex
>examples.

Interesting. I'd have said that constructors and destructors in C++
(particularly the use for globals and class data members) were a major
strong point compared to most OO languages.

Ok, flamebait section. It looks to me as though most of CLOS is
designed to support CS research, and is inappropriate elsewhere because
it takes too long to learn and work out which bits you didn't need to
know about. I work on the software end of industrial research programs.
Commercial programming is generally simple if you do it right. I've got
55000 (used to be 85000) lines of C++ in one program, with only one
"clever" bit in it (implementing a dynamic constructor, i.e. making an
object whose class is decided at run-time by the data). It uses only
single inheritance (apart from the standard streams library). If I were
to implement in Lisp, I could do it in a subset of Scheme. I don't need
CL or CLOS, and I find it difficult to see for what sort of commercial
project I'd need that expressive power.

As I understand it, effort is going into further extensions of the
language, e.g. the loop construct. If it is intended that the language
should be used as a general-purpose language (and any comparison with
C++ surely implies this), these extensions seem to be heading in the
wrong direction. As a technical manager, I do make an honest attempt to
identify the "best" current language, but in the case of CL and CLOS I
pretty well have to guess at its utility because I just don't have the
time to explore it. [Please don't email back with statistics - it's not
that I don't believe them or that they're not interesting, but
partisans of any Holy Language have similar stats, and I prefer to use
my own performance on the Profane Languages as a basemark].

Anyway, as you fill your napalm tanks, please bear in mind that I am
browsing the Lisp groups with an open mind. I'm not interested in
converting you to C++, I just thought it might be useful to explain why
I don't use CL.

Scott

Kelly Murray

unread,
Jun 22, 1995, 3:00:00ā€ÆAM6/22/95
to

I agree that using ash is not as clear as floor. And in fact, AllegroCL will
transform a call to (floor (the fixnum x) 2) into a right-shift
when speed > safety and the second return value from floor is ignored
(see details below.)

However, if you are writing performance-critical code (the 10% taking up 90%),
I think it's best not to rely too much on hope, especially if you want
good portability, as one compiler might not do the transform.

In general, I think there is a tendency to expect too much from a Lisp compiler.
Maybe the original programmer thought the system could transform
the (floor (/ (- x y) 2)) into the f4 version?
No C programmer would ever expect this kind of magic from their compiler.

The Franz compiler guy (Duane Rettig) works hard to include many of
these optimizations, but you can't expect the system to transform
everything that is theoretically possible, as well as demand
the Lisp image be as small as Basic
(actually Visual Basic's footprint is bigger than AllegroCL :-)

Here's what Duane says:

"There are two reasons f3 does not expand into f4 as is; one generic
and one specific to Allegro CL (I can't speak for the other lisps):

1. (generic) The call to FLOOR is in tail position, and FLOOR returns
two values. Thus, if FLOOR is going to be inlined, it had better
give both values. ASH doesn't quite do the job. To get it to
return only the first value, enclose it in a VALUES form.

2. Allegro CL does not trust declarations, by default, unless the
speed optimization quality is higher than safety.

So with the following tweaks, f3 does essentially become f4:

user(7): (defun f3 (a b)
           (declare (optimize speed) (fixnum a b))
           (values (floor (the fixnum (- a b)) 2)))
f3
user(8): (compile 'f3)
f3
nil
nil
user(9): (disassemble 'f3)
;; disassembly of #<Function f3>
;; formals: a b

;; code start: #x6ea7e4:
0: save %o6, #x-68, %o6
4: cmp %g3, #x2
8: tne %g0, #x10
12: taddcctv %g0, %g1, %g0
16: sub %i0, %i1, %o4
20: sra %o4, #x3, %o4
24: sll %o4, #x2, %l0
28: mov %l0, %o0
32: mov #x1, %g3
36: jmp %i7 + 8
40: restore %o0, %g0, %o0
user(10):

Some of the other cruft in the f3 function (like arg-count and
interrupt checking) can be killed by pulling out the stops and
compiling with safety=0 and debug=0. "

-Kelly Murray k...@franz.com Franz Inc. http://www.franz.com


Pierpaolo Bernardi

unread,
Jun 23, 1995, 3:00:00ā€ÆAM6/23/95
to
David Neves (ne...@ils.nwu.edu) wrote:
: In article <1995Jun20....@franz.com>, k...@math.ufl.edu (Kelly
: Murray) wrote:

: : In article <KANDERSO.95...@bitburg.bbn.com>,
: kand...@bitburg.bbn.com (Ken Anderson) writes:
: ...
: : >> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))
: :
: : Good analysis, but the fastest method is to use arithmetic shift
: : instead of floor:
: :
: : (defun f4 (a b) (ash (the fixnum (- a b)) -1))
: I would hope that writing "f3" would cause the compiler to compile it as
: "f4". "f3" is certainly clearer to the reader. If not, the user could
: also write a compiler macro to turn an "f3" into an "f4".
: -David

Since f3 and f4 are not equivalent, I doubt any compiler would do this.
floor returns two values; f3 would have to discard the second one to be
equivalent to f4.
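
For example (standard CL; multiple values shown after =>):

(floor 7 2) => 3, 1 ; quotient and remainder
(values (floor 7 2)) => 3 ; second value discarded
(ash 7 -1) => 3 ; single value, like the VALUES-wrapped form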

Pierpaolo


Holger Duerer

unread,
Jun 23, 1995, 3:00:00ā€ÆAM6/23/95
to
>>>>> On Thu, 22 Jun 1995 20:35:46 GMT, k...@franz.com (Kelly Murray) said:
Murray> [...]

Murray> In general, I think there is a tendency to expect too much from a Lisp compiler.
Murray> Maybe the original programmer thought the system could transform
Murray> the (floor (/ (- x y) 2)) into the f4 version?
Murray> No C programmer would ever expect this kind of magic from their compiler.

Well, it's not a fair comparison since (as your later comments
explained) no C-construct does as much as the Lisp one (i.e. no
multiple values return, no safety assumptions).

Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
my compiler to generate the same code as (x-y)>>1, at least for
optimized compilations. This is no big demand on the compiler really.
(I just checked and my gcc does it even without any optimization flags
turned on.)

For the same reason I would also expect a Lisp compiler to do the same
(i.e. with speed opt. set and when it can be guaranteed that only the
first value is used).

Lisp is supposed to be for easy prototyping. Code like (ash <expr> -1)
does not belong in such a language (unless you really *mean*
shifting and not arithmetic).

Holger
--
------------------------------------------------------------------------------
Holger D"urer Tel.: ++49 421 218-2452
Universit"at Bremen Fax.: ++49 421 218-2720
Zentrum f. Kognitionswissenschaften und
FB 3 -- Informatik
Postfach 330 440 <Holger...@PC-Labor.Uni-Bremen.DE>
D - 28334 Bremen <http://www.uni-bremen.de/Duerer/Holger.html>

Kevin Gallagher

unread,
Jun 23, 1995, 3:00:00ā€ÆAM6/23/95
to
Holger Duerer writes:
>Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
>my compiler to generate the same code as (x-y)>>1, at least for
>optimized compilations. This is no big demand on the compiler really.
>(I just checked and my gcc does it even without any optimization flags
>turned on.)

So, what would you expect to be the value of (10 - 17) / 2 ?
-3, -4, -3.5 (hah!) -- or one of -3 or -4 depending on the compiler?

C has chosen efficiency over accuracy. This is a perfectly
reasonable design decision, entirely within the C tradition. Good C
programmers know this; bad ones just have inexplicable, intermittent
bugs in their programs.

Common Lisp has chosen accuracy over efficiency while, at the same
time, giving you the tools to get efficiency when you say what you
want.

Kevin Gallagher

Thomas A. Russ

unread,
Jun 23, 1995, 3:00:00ā€ÆAM6/23/95
to
In article <...> ho...@random.pc-labor.uni-bremen.de (Holger Duerer) writes:
> Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
> my compiler to generate the same code as (x-y)>>1, at least for
> optimized compilations. This is no big demand on the compiler really.

Except that this is the wrong answer for Common Lisp. It is only valid
as long as (x-y) is evenly divisible by 2. In Common Lisp, the division
of two integers can (and does) result in a rational number. For
example, in CL: (/ (- 10 7) 2) ==> 3/2
whereas in C: (10-7)/2 ==> 1

That's why the entire discussion using floor came in.

> (I just checked and my gcc does it even without any optimization flags
> turned on.)
>

> For the same reason I would also expect a Lisp compiler to do the same
> (i.e. with speed opt. set and when it can be guaranteed that only the
> first value ist used).

With floor instead of /, agreed. Numeric optimization is not one of
most Lisp implementations' strong points [CMUCL excepted, perhaps],
which can be really annoying, particularly for the fixnum case. The
effect is that many functions called on fixnums do not have special
handling (for example oddp, evenp!!!).


Of course, another source of confusion is that CL has the integer type,
but since that includes fixnums (what most programmers really think of
when they say "integer") and bignums, declaring something to be of type
"integer" doesn't let the system do much in the way of optimization at
all.
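
A small sketch of the difference (illustrative declarations only):

(defun sum-integer (a b)
  (declare (integer a b) (optimize speed))
  (+ a b)) ; INTEGER admits bignums, so the generic + path remains

(defun sum-fixnum (a b)
  (declare (fixnum a b) (optimize speed))
  ;; THE asserts the sum stays a fixnum, enabling machine addition.
  (the fixnum (+ a b)))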

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Michael Hosea

unread,
Jun 23, 1995, 3:00:00ā€ÆAM6/23/95
to
In article <3sf5d1$k...@kernighan.cs.umass.edu>,

Kevin Gallagher <ke...@cs.umass.edu> wrote:
>
>C has choosen efficiency over accuracy.

"Accuracy" isn't the right word. In any language there are pitfalls
and good programming practices to avoid them. You even allude to this
in the context of C, but you don't mention that there are pitfalls to
avoid in LISP programming as well. Perhaps "elegance" would be a
better word. I'd agree with the statements: "The design of C favors
efficiency over elegance," and "The (traditional) design of LISP favors
elegance over efficiency."

Regards,

Mike Hosea (815) 753-6740 Department of Mathematical Sciences
http://www.math.niu.edu/~mhosea Northern Illinois University
mho...@math.niu.edu DeKalb, IL 60115, U.S.A.

Chris Page

unread,
Jun 23, 1995, 3:00:00ā€ÆAM6/23/95
to
In article <3s9kk9$l...@Yost.com>, yo...@Yost.com (Dave Yost) wrote:

> Anyone in a position to improve lisp's debugging tools
> *must* see this:
>
> http://lieber.www.media.mit.edu/people/lieber/Lieberary/ZStep/ZStep.html

You're right. This is a very educational paper. I've been thinking about
these kinds of problems for years now. It's amazing how primitive our
debugging tools are.

--
Chris Page | Internet junk mail, advertisements,
Software Wrangler | chain letters, and SPAMs bite...
Claris Corporation |
chris...@powertalk.claris.com | "Cut it out! :-P"

Disclaimer: opinions are not necessarily those of my employer.

Richard Urwin

unread,
Jun 24, 1995, 3:00:00ā€ÆAM6/24/95
to
> is <jyn...@bmtech.demon.co.uk> Scott Wheeler <sco...@bmtech.demon.co.uk>
>> is <creedy-1406...@128.29.151.94> Chris Reedy

>>As someone who knows a fairly large number of programming languages (Lisp,
>>C, C++, Pascal, Ada, 3 assembly languages, a little Smalltalk ...) and
>>who knew most of them before coming to Lisp, here, off the top of my head,
>>are some of the things I found myself stumbling over.

>I have a similar language background but I don't know Lisp in any
>detail, although in general I try to have a reading knowledge of most
>common languages in order to evaluate their strengths. I thought it
>might be useful to comment why I haven't found it easy to get into.

I also have a similar background, but I have a third point of view. I
program in C to make money, except for one recent project in C++. I
program in XLisp for fun. Contrary to one of the earlier posts in this
thread I find C++ horendously inelegant. C is a rough workhorse, and I
would not call it elegant, but C++ has so many fudges to avoid problems
created by the C mindset that it is much harder to program than XLisp.
(I cannot comment on CLOS.)

>>1. The size of the language.

>After working out a way to do something, I'm not confident that it's
>the *right* way.

Me too. However, if I had the same number of years' experience and the odd
course of tuition as I do for C, this would not be the case. There are
plenty of books out there that can teach the *right* way.

>>3. Garbage collection


>I find it awkward to bear in mind that something returned from a
>function is the thing itself, not a copy.

This is not a problem, unless you play about with the dangerous functions
(rplaca etc.). Lisp is designed to make this distinction invisible.

>Ok, flamebait section. It looks to me as though most of CLOS is

>designed to support CS research, and is inappropriate elsewhere...

>I've got
>55000 (used to be 85000) lines of C++ in one program, with only one
>"clever" bit in it (implementing a dynamic constructor, i.e. making an
>object who's class is decided at run-time by the data).

Which is trivial in XLisp of course.

>If I were
>to implement in Lisp, I could do it in a subset of Scheme. I don't need
>CL or CLOS, and I find it difficult to see for what sort of commercial
>project I'd need that expressive power.

I work with real time control and graphics, and I can see a use for at
least dynamic construction. I really had problems with C++'s strict
typing. I wanted a list of unrelated objects, trivial in XLisp but
impossible (or badly error-prone, because it forgets how big objects
are) in C++.
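
For instance, a heterogeneous list in Lisp is just (illustrative):

(setq things (list 42 "a string" 'a-symbol #'car))

No common base class and no size bookkeeping: each element carries its
own type tag.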

But the power of C is in its library, and the same problem exists here.
The reference book is two inches thick and nothing tells you how to do
things. Consider, for example converting a string to a float. How do I
find atof() in the manual? Or is sscanf("%g") as good? Why did I choose
to write my own function when I had to do it? (And I did make the right
decision.)

C also has excessive power for any given project area. A given project
area will not use all of ioctl(), strrchr(), tanhl(), atan2(),
spawnlpe(), intdosx() and inp().

>I just don't have the time to explore it.

Would you expect anyone to know anything about C++ having only used
Pascal before?

Get XLisp or CLisp for free, buy a book or two and practice. It will
pay you back. When you decide that you like it you can buy a compiler
version.

/ ____Richard Urwin____ | | Space is Big. Really Big...You \
/ r...@soronlin.demon.co.uk | | may think it's a long way down \
\ Birmingham, United Kingdom | | the road to the chemists, but /
\ |_| that's just peanuts to space. /

Ken Anderson

unread,
Jun 24, 1995, 3:00:00ā€ÆAM6/24/95
to
In article <TAR.95Ju...@hobbes.ISI.EDU> t...@ISI.EDU (Thomas A. Russ) writes:

In article <...> ho...@random.pc-labor.uni-bremen.de (Holger Duerer) writes:
> Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
> my compiler to genrerate the same code as (x-y)>>1, at least for
> optimized compilations. This is no big demand on the compiler really.

Be careful! This optimization may not be happening the way you expect
it to. To see this try compiling the following with gcc -O2 -S:

long f5 (long a, long b) { return (a-b)/2; }
long f6 (long a, long b) { return (a-b)>>1; }
unsigned long f7 (unsigned long a, unsigned long b) { return (a-b)/2; }

Except that this is the wrong answer for Common Lisp. It is only valid
as long as (x-y) is evenly divisible by 2. In Common Lisp, the division
of two integers can (and does) result in a rational number. For
example, in CL: (/ (- 10 7) 2) ==> 3/2
whereas in C: (10-7)/2 ==> 1

That's why the entire discussion using floor came in.

Another difference between Lisp and C is that in C, functions like floor or
ceiling are not defined for integers. Writing such a function is trickier
than it might seem since "If either operand is negative, then the choice
(of how to choose the integer closest to the quotient) is left to the
discretion of the implementor." [S.P. Harbison, G.L. Steele Jr., C a
Reference Manual, Prentice Hall, NJ, 1991 p. 187]

> (I just checked and my gcc does it even without any optimization flags
> turned on.)
>
> For the same reason I would also expect a Lisp compiler to do the same
> (i.e. with speed opt. set and when it can be guaranteed that only the
> first value ist used).

With floor instead of /, agreed. Numeric optimization is not one of
most lisp implementation's strong points [CMUCL excepted, perhaps],
which can be really annoying, particularly for the fixnum case. The
effect is that many functions called on fixnums do not have special
handling (for example oddp, evenp!!!).

While CMUCL is quite noteworthy, i don't think this is fair to the other
Lisp implementations out there. From what i've seen, they do try to make
numeric optimization a strong point, and even handle evenp. Of course, we
should keep prodding them to do better. Perhaps we should develop a
benchmark set that identifies Lisps that aren't doing all they could. I
enclose a benchmark for floor below.

Of course, another source of confusion is that CL has the integer type,
but since that includes fixnums (what most programmers really think of
when they say "integer") and bignums, declaring something to be of type
"integer" doesn't let the system do much in the way of optimization at
all.

Thanks to everyone for responding to this thread. In my original post, i
presented some relative timings and forgot to put a VALUES in F3. Luckily,
this omission brought in discussions of other subtleties. Here are some
timings for several Lisps that are within arm's reach. The times are for
the body of the functions (the function call and benchmarking overhead has
been removed.) So for example, the Allegro time for F3 is essentially the
time for 3 instructions, CMUCL takes several more instructions, and Lucid
calls an internal routine. Interestingly, the C version of F3, called f5
above, takes 4 instructions.

(defun f1 (a b) (floor (/ (- a b) 2)))
(defun f2 (a b) (floor (- a b) 2))

(defun f3 (a b) (declare (fixnum a b)) (values (floor (the fixnum (- a b)) 2)))
(defun f4 (a b) (declare (fixnum a b)) (ash (the fixnum (- a b)) -1))

;;; KRA 24JUN95: Bitburg, Sparc 10, Allegro CL
microsec. %err What
34.280 0.3 ; (F1 20 5)
5.595 0.4 ; (F1 20 6)
4.718 0.2 ; (F2 20 5)
4.709 0.4 ; (F2 20 6)
0.076 0.2 ; (F3 20 5)
0.074 0.4 ; (F3 20 6)

cmucl 17f
42.280 3.8 ; (F1 20 5)
6.576 0.3 ; (F1 20 6)
4.702 0.3 ; (F2 20 5)
3.667 0.1 ; (F2 20 6)
0.158 0.2 ; (F3 20 5)
0.154 0.2 ; (F3 20 6)
0.155 0.1 ; (F4 20 5)
0.157 0.4 ; (F4 20 6)

Lucid 4.1
29.089 0.2 ; (F1 20 5)
9.745 1.6 ; (F1 20 6)
2.203 0.8 ; (F2 20 5)
2.155 0.4 ; (F2 20 6)
3.595 0.1 ; (F3 20 5)
3.548 0.9 ; (F3 20 6)
0.085 0.5 ; (F4 20 5)
0.085 0.1 ; (F4 20 6)

Scott Wheeler

unread,
Jun 25, 1995, 3:00:00ā€ÆAM6/25/95
to
In Article <804021...@soronlin.demon.co.uk> Richard Urwin writes:
>> ...
[I'm replying in comp.lang.lisp if you're interested, but I think the
thread is moribund here.]


Scott Wheeler

unread,
Jun 25, 1995, 3:00:00ā€ÆAM6/25/95
to
In Article <804021...@soronlin.demon.co.uk> Richard Urwin writes:
>>>3. Garbage collection
>>I find it awkward to bear in mind that something returned from a
>>function is the thing itself, not a copy.
>
>This is not a problem, unless you play about with the dangerous functions
>(rplaca etc.). Lisp is designed to make this distinction invisible.

Which is why I went on to say that it doesn't usually matter in
practice. By the way, most of your answer is missing the fundamental
point: the discussion is about why CL is not popular, and the
difficulties that people have in adapting to it. While I appreciate
your advice, it doesn't alter the point that these are stumbling-blocks
to anyone from a mainly non-GC background.

>I work with real time control and graphics, and I can see a use for at
>least dynamic construction.

Yes, but do you want/need it in the language? I'm happy with
implementing it in a vanilla OO language (Eiffel, C++ etc.).

> I really had problems with C++'s strict
>typing. I wanted a list of unrelated objects, trivial in XLisp,
>impossible, (or badly error prone because it forgets how big objects
>are,) in C++.

Of course you could now use RTTI in ANSI-draft C++. I'm puzzled as to
how one would use a list of completely unrelated objects though.

>But the power of C is in its library, and the same problem exists here.
>The reference book is two inches thick and nothing tells you how to do
>things. Consider, for example converting a string to a float. How do I
>find atof() in the manual? Or is sscanf("%g") as good? Why did I choose
>to write my own function when I had to do it? (And I did make the right
>decision.)

Buy a different reference book, you've got a dud :-). PJ Plauger's
"Standard C library" is about 3/4" thick, contains complete source code
and hints on appropriateness of use, and it would take me about 15s to
find the functions - although I'd actually use the online help from my
compiler. Anyway, that's by the by. More relevant is that C etc. use
libraries, whereas Lisp generally uses language extensions. Yes, I know
you can implement them in Lisp, but in practice they look like extra
bits of the language, carrying new syntax with them: this makes the
language harder to learn. My guess is that this tradition arose partly
because CL antedates CLOS.

Take the loop construct as an example. In C++, or most OO languages, if
you had a new idea for a fancy iteration structure, you'd probably
build an iterator class and an associated cursor class - both possibly
parametrised. In CL, you extend the language. Seen from the outside, it
has a very COBOLish feel just because of this. Of course this ability
to extend the language is a major strength of Lisp in the research
environment, but generally it's the last thing I'd want showing up in
code in a commercial project that has to be maintained by someone other
than the author.
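
For a flavour of what I mean (standard CL, illustrative):

(loop for i from 0 below 10
      when (evenp i) collect i) ; => (0 2 4 6 8)

The FOR/BELOW/WHEN/COLLECT keywords form a small embedded language of
their own rather than ordinary function calls, which is exactly the
extension style at issue.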

>Would you expect anyone to know anything about C++ having only used
>Pascal before?

What has that to do with the price of tobacco? My interest is in
whether one can gain a working knowledge of the language in a
reasonable evaluation period, then having gained that knowledge judge
whether the language is useful to us. I've done this many times. It's
easiest with languages that have a strict division between syntax and
library, and hardest with languages from the AI community. CL has to be
the largest language that I've come across. By the way, we've got a
semi-tame Lisp hacker with no verbal off switch (hi Jon), so it's
getting a better crack of the whip than say Eiffel, for which I know no
users.

>Get XLisp or CLisp for free, buy a book or two and practice. It will
>pay you back. When you decide that you like it you can buy a compiler
>version.

Thanks, but I've had XLisp for years, and I've got Allegro\PC at the
moment. Evaluating XLisp to decide whether to use CL is about as useful
as evaluating C in order to decide on C++ - they're not the same
language.

Lest anyone miss the point - I'm *not* anti-CL or particularly fanatic
about C++. My impression is that it's a nice language for a full-time
single-seat programmer, but not particularly useful commercially
(considering languages such as Eiffel and Smalltalk as alternatives),
and I've tried to give some idea of the reasons for that. Anyway, have
fun with it (drat, I'm supposed to be a suit) - sorry, have a
productive work session.

Scott

[by the way, I've pruned the newgroups down a bit]

Scott Wheeler

unread,
Jun 26, 1995, 3:00:00ā€ÆAM6/26/95
to

In Article <19950626...@naggum.no> Erik Naggum writes:
>| Of course this ability to extend the language is a major strength
>| of Lisp in the research environment, but generally it's the last
>| thing I'd want showing up in code in a commercial project that has
>| to be maintained by someone other than the author.
>

>a new class extends the C++ type system, Scott, including operator
>overloading, conversion of objects of various types, etc, etc. much
>worse than Lisp, IMNSHO. people seem to want them in commercial
>projects all the time. (strictly speaking, I don't know whether they
>are maintained. :)

My first reaction was - no, you're wrong, Lisp's language extensions go
much further, and there's a much bigger gap between someone designing
an extension, and the user of that extension, than there is for a class
designer and class user. However I think you do have a point. I very
rarely define operators or type conversions for my C++ classes, and I
think it has helped in maintaining the code. One advantage of learning
C at an early age and getting bitten when I crammed too much into the
control statement of a for() loop - one learns fast: "Don't Be
Clever!".

>it actually seems that language evolution is now in vogue. all my
>favorite tools have cancer. there's a new syntax for one of them
>every other day, and what do you know? people are actually jumping up
>and down screaming for _more_ new syntaxes and _more_ hard-to-learn
>things in those languages.

On the other hand, there is strong interest in minimal OO languages
with the work done in standard class libraries, particularly Eiffel. It
will be interesting to see if any take off.

>Lisp is to blame because it did all this _years_ before these guys
>picked it up, so the language evolution in Lisp requires people to
>study a lot and know a lot of weird science before they can usefully
>extend the language. take a look at the Scheme crowds (plural!). they
>have "language extension" written all over them these days. Lisp is
>to blame because it only let a few arrogant know-it-alls do it on
>their own so the new kid on the chip couldn't put his favorite
>construct in there. no wonder he doesn't want to play.

Err, would you mind standing a bit further away? I'd rather not get hit
by the napalm. Anyway, I'd disagree - CL at least must be one of the
languages with the most support for putting your favourite construct
in, which is a big reason why I think it a bit of a risk in the
commercial environment. The only one I can think of that offers stronger
support is the Poplog environment.

Scott

Erik Naggum

unread,
Jun 26, 1995, 3:00:00ā€ÆAM6/26/95
to
[Scott Wheeler]

| Of course you could now use RTTI in ANSI-draft C++. I'm puzzled as to
| how one would use a list of completely unrelated objects though.

the typical "stack" is such a list. if we consider "list" a little
relaxedly, so is the typical "file system". of course, "completely" may
have to be relaxed, too.

| Buy a different reference book, you've got a dud :-). PJ Plauger's
| "Standard C library" is about 3/4" thick, contains complete source code
| and hints on appropriateness of use, and it would take me about 15s to
| find the functions - although I'd actually use the online help from my
| compiler.

I have actually timed myself (aren't computers great?) when I need to look
something up in CLtL2. I average 11 seconds to find what I'm looking for,
usually by going through the very good index. 15 seconds sounds excessive.
man, it adds up to _days_ during a lifetime.

| More relevant is that C etc. use libraries, whereas Lisp generally uses
| language extensions.

Bjarne Stroustrup doesn't think those two are such opposites as you imply.
(no, he's not my hero, he's a very smart guy gone very berserk. doesn't mean
he won't do a lot of good in between.)

| Take the loop construct as an example.

the loop construct is quite atypical, but I assume you know that.

| Of course this ability to extend the language is a major strength of
| Lisp in the research environment, but generally it's the last thing I'd
| want showing up in code in a commercial project that has to be
| maintained by someone other than the author.

a new class extends the C++ type system, Scott, including operator
overloading, conversion of objects of various types, etc, etc. much worse
than Lisp, IMNSHO. people seem to want them in commercial projects all the
time. (strictly speaking, I don't know whether they are maintained. :)

it actually seems that language evolution is now in vogue. all my favorite
tools have cancer. there's a new syntax for one of them every other day,
and what do you know? people are actually jumping up and down screaming
for _more_ new syntaxes and _more_ hard-to-learn things in those languages.

Lisp is to blame because it did all this _years_ before these guys picked
it up, so the language evolution in Lisp requires people to study a lot and
know a lot of weird science before they can usefully extend the language.
take a look at the Scheme crowds (plural!). they have "language extension"
written all over them these days. Lisp is to blame because it only let a
few arrogant know-it-alls do it on their own so the new kid on the chip
couldn't put his favorite construct in there. no wonder he doesn't want to
play.

seriously, this too, Lisp did before everybody else. I sometimes wonder if
there is a cosmic constant for how many times something must be reinvented
before it is considered fully invented. or maybe it's just the old adage
about pioneers never getting to reap the fruits of their labor.

#<Erik 3013169860>
--
NETSCAPISM /net-'sca-,pi-z*m/ n (1995): habitual diversion of the mind to
purely imaginative activity or entertainment as an escape from the
realization that the Internet was built by and for someone else.

Stefan Monnier

unread,
Jul 3, 1995, 3:00:00ā€ÆAM7/3/95
to
In article <jyn...@bmtech.demon.co.uk>,
Scott Wheeler <sco...@bmtech.demon.co.uk> wrote:
] I find it awkward to bear in mind that something returned from a
] function is the thing itself, not a copy. In practice it doesn't
] usually make a difference, but it always feels unsafe. I'm left with
] the uneasy feeling that if I modify a variable it could have side
] effects I haven't anticipated. Not being used to GC languages, I prefer
] the C++ version where you can decide whether to take a copy or a
] reference to the original. The worst (i.e. hardest for me personally to
] work with) version seems to be Eiffel, where copy or reference
] semantics are specified on a class by class basis.

As a big GC fan I must say it is the first time someone argues against GC based
not on slowness/non-real-timeness/etc... but on semantics. I'm puzzled: I hope
your criticism only goes to CL and its all-pointer based view of the world,
because otherwise you can always choose between reference or copy semantics.
Also, as far as I know (my Eiffel experience is limited and getting old), Eiffel
doesn't use classes for copy/reference semantics: it's more like C++ where the
choice is based on the variable's declaration (with a slight difference: instead
of having "object vs. object pointer" you have "expanded object vs. object" (the
default is reference semantics)).

] Anyway, as you fill your napalm tanks, please bear in mind that I

] browsing the Lisp groups with an open mind. I'm not interested in
] converting you to C++, I just thought it might be useful to explain why
] I don't use CL.

I do use CL but I must admit that CLOS, though nifty, is a dog: I always find
myself using defstruct to overcome the performance problems of defclass.
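
For reference, the two forms being compared (slot details illustrative):

(defstruct point x y) ; fixed layout, fast inline slot access

(defclass pointc ()
  ((x :initarg :x :accessor pointc-x)
   (y :initarg :y :accessor pointc-y))) ; redefinable, generic accessors

DEFSTRUCT trades CLOS's flexibility (class redefinition, dispatch) for
raw slot-access speed, which is the swap described above.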


Stefan

Scott Wheeler

unread,
Jul 3, 1995, 3:00:00ā€ÆAM7/3/95
to
In Article <3t8kjm$q...@info.epfl.ch> Stefan Monnier writes:
>In article <jyn...@bmtech.demon.co.uk>,
>Scott Wheeler <sco...@bmtech.demon.co.uk> wrote:
>] I find it awkward to bear in mind that something returned from a
>] function is the thing itself, not a copy. In practice it doesn't
>] usually make a difference, but it always feels unsafe. I'm left with
>] the uneasy feeling that if I modify a variable it could have side
>] effects I haven't anticipated. Not being used to GC languages, I prefer
>] the C++ version where you can decide whether to take a copy or a
>] reference to the original. The worst (i.e. hardest for me personally to
>] work with) version seems to be Eiffel, where copy or reference
>] semantics are specified on a class by class basis.
>

> As a big GC fan I must say it is the first time someone argues
> against GC based not on slowness/non-real-timeness/etc... but on
> semantics. I'm puzzled: I hope your criticism only goes to CL and its
> all-pointer based view of the world, because otherwise you can always
> choose between reference or copy semantics.
> Also, as far as I know (my Eiffel experience is limited and getting
> old), Eiffel doesn't use classes for copy/reference semantics: it's
> more like C++ where the choice is based on the variable's declaration
> (with a slight difference: instead of having "object vs. object
> pointer" you have "expanded object vs. object" (the default is
> reference semantics)).

Firstly, I'm not arguing against GC. For some uses, I'm sure it's the
correct choice, at least on convenience grounds when you are used to
it. I was talking purely about what I found difficult myself (the
subject matter of the original question), with no implication that this
made the language wrong in some way.

On Eiffel, by the way, I'd guess you are used to Eiffel 2. In Eiffel 3,
classes can be defined as "expanded", having copy semantics. Hence
INTEGER is derived from INTEGER_REF, with no additional features, but
with the "expanded" tag. Perhaps I am unjust, but this looks like
creeping perfectionism - "Everything must be an object, everything must
behave the same" - while trying to get numerics to behave sensibly.
By the way, you are correct in saying that you can also add "expanded"
to a single variable.

Scott

Harley Davis

unread,
Jul 4, 1995, 3:00:00ā€ÆAM7/4/95
to

In article <jyp...@bmtech.demon.co.uk> Scott Wheeler <sco...@bmtech.demon.co.uk> writes:

In Article <3t8kjm$q...@info.epfl.ch> Stefan Monnier writes:
>In article <jyn...@bmtech.demon.co.uk>,
>Scott Wheeler <sco...@bmtech.demon.co.uk> wrote:
>] I find it awkward to bear in mind that something returned from a
>] function is the thing itself, not a copy. In practice it doesn't
>] usually make a difference, but it always feels unsafe. I'm left with
>] the uneasy feeling that if I modify a variable it could have side
>] effects I haven't anticipated. Not being used to GC languages, I prefer
>] the C++ version where you can decide whether to take a copy or a
>] reference to the original. The worst (i.e. hardest for me personally to
>] work with) version seems to be Eiffel, where copy or reference
>] semantics are specified on a class by class basis.

The original objection seems a little odd for someone used to
programming in Lisp. Most Lisp libraries do not document the
internals of an object, and there is no equivalent to C/C++ header
files. All access to objects is via functions (although the use of
setf with an accessor-like function certainly provides a hint that an
object may be modified). In any case, and in any language, it is
certainly up to the library designer to decide what objects should and
shouldn't be modified --- not up to the user to decide if he "takes" a
copy or pointer; after all, what if some object is *intended* to be
modified? It is therefore also the library designer's responsibility
to document any potential side effects that might crop up.

In Lisp, primary application objects don't seem to have this problem
very often since their high-level API provides the necessary security
to avoid unwanted low-level modifications. However, sometimes there
is a temptation on the part of a library designer to expose an
internal list maintained by the library for efficiency reasons, with
big warnings not to destructively modify the list. Unfortunately,
beginners don't always know when they are destructively modifying
lists and this can lead to very obscure bugs. So most mature Lisp
libraries never expose internal structures and bugs of this sort are
avoided.

I don't see how this issue connects much with GC.

-- Harley Davis
--

------------------------------------------------------------------------------
Harley Davis net: da...@ilog.fr
Ilog S.A. tel: +33 1 46 63 66 66
2 Avenue GalliƩni, BP 85 fax: +33 1 46 63 15 82
94253 Gentilly Cedex, France url: http://www.ilog.com/


HStearns

unread,
Jul 8, 1995, 3:00:00ā€ÆAM7/8/95
to
Scott, you've said that your "interest is in whether one can gain a
working knowledge of the language in a reasonable evaluation period" and
you cite loop constructs as an example of a core part of the language
which makes it hard for you to do this.

I agree that the syntactical differences in loop from other parts of the
language can be confusing. I am curious, though, what made you feel that
loop was a core part of the language which needed to be studied in order
to gain a "working knowledge of the language."

The way I think about it is this:

The basic rules for Lisp are very few, and very consistent -- especially
as compared to C. (The price for this is a syntax that many people don't
like, but which others, such as myself, don't mind. C'est la vie.) As a
result, Lisp can be taught in an hour, and often is.

It also happens that the expressive power of Lisp can be used to create a
lot of very handy utilities -- some of which use different programming
styles. One can do well with just the basics learned in an hour, happily
creating whatever tools you need to get the job done. In some cases, one
might wonder if someone hasn't already solved some particular problem, and
might look through Steele and other texts for a solution. In the case of
iteration, that solution might be found in a (perhaps) dizzying array of
tools including: loop, mapping utilities, do, sequencers, iterators,
streams and even goto. For me, these options don't make Lisp harder to
learn, just easier to use.
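
To make that concrete, here is the same summation a few of those ways
(standard CL, illustrative):

(loop for x in xs sum x) ; the LOOP mini-language
(reduce #'+ xs) ; a sequence utility
(let ((sum 0)) ; plain DO-family iteration
  (dolist (x xs sum)
    (incf sum x)))

None of these is the one "core" way; they are interchangeable tools.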

Unfortunately, I believe your viewpoint is not unique. Many people look
at the entire contents of the ANSI standard as being the "core" language.
Can you shed any light on why this perception exists, and what Lisp
providers and educators might do to let users just learn the fundamentals
and go have fun?


Robert Elton Maas, 14 yrs LISP exp, unemployed 3.8 years

unread,
Jul 9, 1995, 3:00:00ā€ÆAM7/9/95
to
Why is this thread cross-posted to comp.lang.lisp.x, where I'm seeing
it, since it has nothing specifically to do with XLISP?? Shouldn't this
thread be posted ONLY to the general LISP advocacy/discussion
newsgroup? If you-all agree, could future followups omit
comp.lang.lisp.x and any other inappropriate newsgroups?

Marc Wachowitz

unread,
Jul 9, 1995, 3:00:00ā€ÆAM7/9/95
to
HStearns (hste...@aol.com) wrote:
> Many people look
> at the entire contents of the ANSI standard as being the "core" language.
> Can you shed any light on why this perception exists, and what Lisp
> providers and educators might do to let users just learn the fundamentals
> and go have fun?

I think EuLisp provides some good hints about this: Structure the language
into levels and libraries, such that lower levels and non-library aspects,
as well as the separate libraries, can be learned mostly in isolation, but
of course in design consider the seamless interaction of those pieces. For
more information about EuLisp, look at "ftp://ftp.bath.ac.uk:/pub/eulisp".
(Btw, does anyone have information about the progress of EuLisp? Since 93,
there's version 0.99 of the language definition on that server.)

------------------------------------------------------------------------------
* wonder everyday * nothing in particular * all is special *
Marc Wachowitz <m...@ipx2.rz.uni-mannheim.de>


Scott Wheeler

unread,
Jul 12, 1995, 3:00:00ā€ÆAM7/12/95
to
In Article <DBIFF...@rheged.dircon.co.uk> Simon Brooke writes:
>...
>Unfortunately many people in Western societies have lost the
>confidence to play. They fear to fail, and surround their work with
>elaborate structures of safety nets. They use only tools they
>believe they understand fully --


Whurgh! FLAME BAIT! I wasn't going to continue in this thread again as
I thought I'd covered everything it was useful to say. But may you
attempt to work with looped structures in a reference-counted
environment for your sins.

Now look, I love hacking. I'm presently (slowly) crufting up an occam
compiler in Eiffel for the fun of it. I've written a couple of OO
varieties of C (before C++ was commonly available), one of which looked
vaguely CLOSish with multiple dispatch. But this *doesn't* mean I'm
going to take chances on a different language when there are jobs at
stake just because it might be fun, particularly when our existing C++
is also fun. What I'm trying to do is to find out what may be a
successor to C++ 3-5 years from now, and decide whether we should be
doing any pilot work in it now. My best guesses are Smalltalk and
Eiffel at the moment, though it's far from clear. Now *personally*, I
happen to like hacking in languages like Icon, but as a suit, I'd have
to be stupid to commit to that platform with only one supplier, and
that freeware.

>
>this is nonsense of course. Any modern computer system is way too
>complex for anyone to understand fully. Like any other work of Magick,
>you have to put some faith in the competence of the other wizards.

What a load of dingo's kidneys. Look, go over to comp.lang.eiffel, and
see the fuss there about the instability of the PEW Eiffel compiler.
Suppose I'd trusted Meyer (high priest of reliable software) to
produce - I'd really be stuffed now. I don't trust *anyone* unless I
can check their work myself, and I'm always edgy unless I have back
doors. I don't buy this "specialisation" rubbish either. I may not keep
everything swapped in, but anything short of the details of CPU chip
architecture is easy enough to find out about, and easy enough to
understand.

>-- and consequently they'll never learn LisP. They'll never achieve
>much, either. It isn't because they won't learn LisP that they won't
>achieve much: it's because they've forgotten how to play.

Scruttocks, my dear chap. There's been some interesting work done in
Lisp, but there's hardly a monopoly. And naff-all commercial software,
though that's not necessarily something to hold against the language.

Scott

Harley Davis

unread,
Jul 12, 1995, 3:00:00ā€ÆAM7/12/95
to

In article <3to8bn$1...@trumpet.uni-mannheim.de> m...@ipx2.rz.uni-mannheim.de (Marc Wachowitz) writes:

HStearns (hste...@aol.com) wrote:
> Many people look
> at the entire contents of the ANSI standard as being the "core" language.
> Can you shed any light on why this perception exists, and what Lisp
> providers and educators might do to let users just learn the fundamentals
> and go have fun?

I think EuLisp provides some good hints about this: Structure the
language into levels and libraries, such that lower levels and
non-library aspects, as well as the separate libraries, can be
learned mostly in isolation, but of course in design consider the
seamless interaction of those pieces. For more information about
EuLisp, look at "ftp://ftp.bath.ac.uk:/pub/eulisp". (Btw, does
anyone have information about the progress of EuLisp? Since 93,
there's version 0.99 of the language definition on that server.)

EuLisp itself has not undergone very much evolution since 93. (Some,
but not much.) The EuLisp committee no longer has EEC financing and
so work has slowed down. The good news is that implementation work
continues. Julian Padget's group at the University of Bath is still
working on FEEL, the public domain implementation of EuLisp, and Ilog
is, of course, still doing Ilog Talk, our implementation of the
proposed ISO Lisp standard with many EuLisp-based extensions. For
information on Ilog Talk, please contact in...@ilog.com or visit our
Web server at <URL:http://www.ilog.com/>.

-- Harley Davis
--

-------------------++** Ilog has moved! **++----------------------------
Harley Davis net: da...@ilog.fr
Ilog S.A. tel: +33 1 49 08 35 00
9, rue de Verdun, BP 85 fax: +33 1 49 08 35 10

Marcus Daniels

unread,
Jul 12, 1995, 3:00:00ā€ÆAM7/12/95
to
>>>>> "Scott" == Scott Wheeler <sco...@bmtech.demon.co.uk> writes:

> In Article <DBIFF...@rheged.dircon.co.uk> Simon Brooke
> writes:

Simon> this is nonsense of course. Any modern computer system is way too
Simon> complex for anyone to understand fully. Like any other work of
Simon> Magick, you have to put some faith in the competence of the other
Simon> wizards.

Scott> What a load of dingo's kidneys. Look, go over to
Scott> comp.lang.eiffel, and see the fuss there about the instability
Scott> of the PEW Eiffel compiler. Suppose I'd trusted Meyer (high
Scott> priest of reliable software) to produce - I'd really be stuffed
Scott> now. I don't trust *anyone* unless I can check their work
Scott> myself, and I'm always edgy unless I have back doors. I don't
Scott> buy this "specialisation" rubbish either. I may not keep
Scott> everything swapped in, but anything short of the details of CPU
Scott> chip architecture is easy enough to find out about, and easy
Scott> enough to understand.

There is a leap-of-faith when you use any software. Learning your way
around any large language package is a big investment of time. Hell,
learning what constitutes correct behavior is a big investment of
time. Typically, it is impossible to justify this cost. So only
the people who accept it as play will evolve skills.

In business, the concern is security. One way to have security is to
make sure nothing goes wrong: reliable software. Of course, the usual
way is to have insurance and lawyers. The OO authorities merely
exist to validate to execs the fact that getting software right isn't
easy. Spend money, and be responsible.

If the methodologies the OO authorities offer are really so powerful, they
should implement them, and write reliable programs. BUT OBVIOUSLY,
the reason people don't use their stultifying ideas is because it
inhibits synthesis. The point is that suits can utilize these
design minded people and find someone to blame. Perfect sense!

...he said "some faith" not "blind faith". One shouldn't be
in the position of being inhibited (or paralyzed) from fixing a bug.


Scott Wheeler

unread,
Jul 13, 1995, 3:00:00ā€ÆAM7/13/95
to
>There is a leap-of-faith when you use any software. Learning your way
>around any large language package is a big investment of time. Hell,
>learning what constitutes correct behavior is a big investment of
>time. Typically, it is impossible to justify this cost. So only
>the people who accept it as play will evolve skills.

Not entirely true. You need one or two "pioneers" in a department to
explore a new field, typically in their own time (which is what I do).
Having done that, some others are going to "evolve skills" in a
successful area purely because they've been told to.

>In business, the concern is security. One way to have security is to
>make sure nothing goes wrong: reliable software. Of course, the usual
>way is to have insurance and lawyers.

[checks - yup, this is coming from a US address]

>The OO authorities merely exist to validate to execs. the fact that
>getting software right isn't easy.

Codswallop.

>Spend money, and be responsible.
>
>If methodologies the OO authorities offer are really so powerful, they
>should implement them, and write reliable programs.

At first sight, this is reasonable. I tend to find that a lot of the
"names" in the field are dreadful programmers - have a look at the
implementation of Wirth's Oberon, for instance. Yet Wirth is still
worth listening to, providing you take him with a pinch of salt. Even
more so, Meyer is worth looking at. By the way, how did we get on to
"methodologies"? I didn't mention them - I don't even use the word
since it's pidgin English ("methodology"="study of methods", not
"method")

>BUT OBVIOUSLY,
>the reason people don't use their stultifying ideas is because it
>inhibits synthesis. The point is that suits can utilize these
>design minded people and find someone to blame. Perfect sense!

By 'eck, you're even more cynical than me! Some of these ideas *are*
used successfully. I've seen some very impressive results with
Shlaer-Mellor, for instance, which we used to make sense of a huge
project for industrial design software. While I'm not keen on them for
use in every case or even most, your "inhibits synthesis" sounds very
like the "constrains creativity" squeals that accompanied structured
programming.

Scott


Scott Wheeler

unread,
Jul 24, 1995, 3:00:00ā€ÆAM7/24/95
to
In Article <DC54C...@rheged.dircon.co.uk> Simon Brooke writes:
>Certainly. And intentionally. But thinking man's flame bait.

So what are you complaining about? You got a thinking man flaming back.

>and it
>would have been better to have read and understood it before replying

<sighs deeply. Pats Simon gently on the head with a 6lb mallet>

>>>...
>>>this is nonsense of course. Any modern computer system is way too
>>>complex for anyone to understand fully. Like any other work of Magick,
>>>you have to put some faith in the competence of the other wizards.
>>
>>What a load of dingo's kidneys. Look, go over to comp.lang.eiffel, and
>>see the fuss there about the instability of the PEW Eiffel compiler.
>>Suppose I'd trusted Meyer (high priest of reliable software) to
>>produce - I'd really be stuffed now. I don't trust *anyone* unless I
>>can check their work myself,
>
>I presume you understand the movement of electrons through the gates
>of your Pentium (4/2=1.9999999999) then?

Yup. Better than you, at any rate: 4.0/2.0 = 2.0 exactly on a Pentium,
and if your compiler says differently, you need to check the
float<->ASCII routines. And yes, to the extent that it affects any of
my systems (and only to that extent), I check stuff and read the
literature. In this case, I'm mainly interested to the extent of making
sure the problems won't affect my work (although I took an academic
interest in the causes). I'm not concerned with above ambient radiation
levels, so movement of electrons through gates doesn't bother me
(though again, I do have some interest in CPU architecture). I don't do
critical FP work, so I didn't check the Pentium architecture. Someone
else *did* need to establish its correctness, didn't trust the
"wizards" and showed the integer bruising problem as a result. Umm,
love-15, I think.

>Or the precise details of the
>handshaking on your SCSI bus? I don't. I don't want to.

No-one wants to. But my point was that you can't trust components
untested. As a matter of fact, historically we've had a lot of problems
with the Sun implementation of SCSI, but I don't need to go to the
level of handshaking to prove that (but purely out of interest, yes, I
did once learn this stuff - it wasn't terribly complex).

>I know if I
>messed my mind with that sort of detail I couldn't do my job.
>Remember: your average desktop computer has more individual working
>parts than a jumbo-jet -- and that's before you load the software.

They are complex, but not all that complex, particularly as a Jumbo
includes computers. However you are setting up paper tigers here - I'm
not necessarily concerned with *why* a system doesn't work, but with
proving that it is the case (though could you have predicted that
changing a keyboard on a PC could speed up the system without knowing
about the A20 line and its relation with the 8048 processor?). Usually
this is software, but occasionally hardware is relevant too - for
instance I've just put a DAT drive on the mv British Steel (a bulk
carrier) for data logging purposes. Since it won't come back for 6
weeks, it is critical to find out whether the thing fails beforehand.

Now to return to our onions, the question is whether I am prepared to
trust "other wizards" to be competent in their software "Magick". No,
I'm not. Particularly if they go all pseudo-mystic on me, because I've
got this strange idea that it correlates with them holding ideas like a
mis-quoted version of Murphy's Law being responsible for their errors
[have you ever come across the story of why the RAF had to start up
propaganda against the idea of "gremlins"?].

In *every* system for which I've obtained source, I've found problems.
Sometimes they aren't serious, but on balance putting the effort into
checking other people's work has been financially worthwhile. I
remember in particular the second system I went through (a controller
for a TL oven, to be used for commercial dating of art ceramics for
certification purposes). There was a small but critical problem which
would have given the wrong answers if we loaded more than 8 samples.
This would have left us with some exposure to large law-suits.

More to the point, I was originally commenting on wanting to know what
sort of manipulations Common Lisp is doing in memory, in order to know
what operations might be dangerous. That's not even a question of
trusting the "wizards", because if I assume they are working to
specification, there can still be dangers in a language. To give a
concrete example, I've just been looking at the Steele documentation of
"delete". Now I'm not entirely sure, but it looks as though it could be
dangerous - *unless* I understand the details of what it is doing.
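
For instance (standard CL), DELETE is permitted but not required to
reuse the original cells, so only its return value is meaningful:

(setq xs (list 1 2 3))
(delete 1 xs) ; => (2 3), but XS itself may still be (1 2 3)
(setq xs (delete 1 xs)) ; the safe idiom: always use the result

Knowing whether the cells are reused is exactly the kind of detail one
needs in order to judge what is dangerous.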

>>>-- and consequently they'll never learn LisP. They'll never achieve
>>>much, either. It isn't because they won't learn LisP that they won't
>>>achieve much: it's because they've forgotten how to play.
>>
>>Scruttocks, my dear chap. There's been some interesting work done in
>>Lisp, but there's hardly a monopoly. And naff-all commercial software,
>>though that's not necessarily something to hold against the language.
>

>Breathe deep, read, understand. I didn't say that you could only do
>good work in LisP. I said, only people who play can do good work,

True, you said it. However you haven't provided any justification for
your statement, and I doubt its veracity. I've seen good software
written by people who play (and I would point out that you've edited
out of your quotes my own interests in this). I have also seen good,
novel software written by serious-minded Germans working in boring
languages and going home at 5pm sharp every day. I find them tedious
company, but so what?

> and
>that LisP is a language which encourages and rewards play.

Interestingly, our Lisp hacker disagrees - I've no idea why, as I tend
to agree with you.

>A person
>who can play will do good work with whatever tools they choose

Here you reverse the direction of logical implication (de Morgan). This
is much more contentious than your previous statement. It is not the
case that just any person who plays with programming will produce good
work, although some do.

>------- si...@rheged.dircon.co.uk (Simon Brooke)
> Here in Galloway we have four seasons. There's Winter, Winter
> II, Son of Winter, and Winter the Prequel.

Luxury! Why, when I were a lad we used to *dream* of a nice warm winter
on a bit of land sticking out into the Gulf stream...

Scott
