Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.
Dismiss

Reference Counting (was Searching Method for Incremental Garbage Collection)

224 views
Skip to first unread message

Tom O Breton

unread,
Nov 23, 1994, 1:26:06ā€ÆAM11/23/94
to
OK, I've been following this thread for a bit, and there are two
objections to reference counting that IMO aren't as strong as they
appear. I await other opinions of course.

0: It orphans circular structures.

Well, if you do it naively, sure. But the way I've always handled such
memory is that there's a top structure (head pointer, usually) that owns
everything inside. Everything outside just borrows memory. Anything that
includes circular pointers is complex enough that it belongs in such an
encapsulation. Obviously you would count references to the head, not to
every member.

1: It's slow, due to count update overhead.

Again, I get the impression that what one has in mind is a very naive
implementation. Say, one that "thrashes" every time there's a loop that
copies and destroys a pointer. I have even heard people who consider
that finding the length of a list in Lisp inc/dec's for _every step_!
The purpose of reference counting is to handle "sibling" deep pointers
where you don't know which one will get destroyed first. Reference
counting anything whose order of destruction you can predict at compile
time seems pointless.

Tom

--
t...@world.std.com
TomB...@delphi.com: Author of The Burning Tower

Henry G. Baker

unread,
Nov 23, 1994, 1:42:07ā€ÆAM11/23/94
to

Good ideas. Now can you formalize these rules & prove conditions
under which any & all garbage will be collected? I'm trying to
understand _provably safe styles_ of programming.

Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hbaker/home-page.html

Henry G. Baker

unread,
Nov 23, 1994, 8:23:02ā€ÆPM11/23/94
to
In article <Czqu5...@world.std.com> t...@world.std.com writes:

>hba...@netcom.com (Henry G. Baker) writes:
>> Good ideas. Now can you formalize these rules & prove conditions
>> under which any & all garbage will be collected? I'm trying to
>> understand _provably safe styles_ of programming.
>
>Wow, we must really be approaching it from different perspectives.
>
>You seem to want a more formal(?) academic(?) statement, which I'm at a
>loss to frame. I probably have some implicit assumption, or assumptions,
>that you don't. To me, as a programmer, hearing or saying "there's a top
>structure that owns everything inside." expresses the idea precisely
>enough. I'm not sure what it is you want me to make more precise.

I'm not trying to be dense. What does 'own' mean? What does 'inside'
mean? How is ownership transferred and/or controlled? What happens
in the presence of exceptions? These are the kinds of things that
need to be made precise in order to convert good ideas into a robust,
workable _style_ of programming.

Tom O Breton

unread,
Nov 24, 1994, 12:02:26ā€ÆAM11/24/94
to
hba...@netcom.com (Henry G. Baker) writes:
> I'm not trying to be dense.

Right, of course not. I ftp'ed a paper of yours yesterday, you're
obviously not dense.

> What does 'own' mean?

Like in real life, an exclusive association. If A owns something, B, C,
and D et. al. do not own it.

Or to put it another way, when resources such as memory are "owned",
they are controlled by a single thing, and that thing is the only thing
that acquires them and releases them. Non-owners might use the resources
with permission, but only the owner can release them.

If the owner does not disrupt the balance (IE, acquisitions exactly
match releases), and maintains everything inside in a predictable
acquisition-status (EG, a double-linked list might start and end empty,
add every node it allocates, and remove every node it deletes) then we
know that everything inside is not garbage.

The "predictable acquisition-status" need not be the same for everything
inside, it just has to be predictable. For instance, we might have a
convention that some pointers are valid and some aren't, and (of course)
only the valid pointers are in the acquired state. We just have to be
sure that we can always tell them apart, for instance by setting
pointers to a NULL value when we invalidate them.

> What does 'inside' mean?

Depends on exactly what your data structure is. Let's say it's a double
linked list. If you traverse the entire list, the nodes that you
encounter are inside, everything you don't encounter is outside.

But let me see if I can frame a more general answer: Inside == reachable
through a certain set of operations from a given starting point. EG, for
a dll, the operations might be get_prev and get_next and the starting
point might be the head node. Obviously you want to deal with sets of
operations that allow insideness to be controlled from a single point or
at least easily controlled.


> How is ownership transferred and/or controlled?

Transferred: The key is that a resource is owned by exactly one thing,
so if B wants to own something A owns, either A stops owning it (EG: the
node in question is removed from the dll), or B acquires something
equivalent (EG: B makes or receives another copy, which B then owns.)


Controlled: If I understand your question, it depends on what tools your
language gives you. Some languages of course will let you stomp all over
everything and never tell you. I prefer strong typing for control, where
a deep pointer is a different type than a shallow pointer, and only
legal operations can happen among them. But that's just a preference.

> What happens in the presence of exceptions?

Depends how far you unwind. If you unwind far enough that the owner
disappears, then it obviously can no longer own resources. Since it
should have no resources after it "dies", it must either release them or
give them to something else.

If you unwind out of acquisition/releasing routines, obviously you want
to correctly communicate in what state they left the resource.


> These are the kinds of things that need to be made precise in order to
> convert good ideas into a robust, workable _style_ of programming.

Thanks, that was much easier to answer.

Tom O Breton

unread,
Nov 23, 1994, 6:20:09ā€ÆPM11/23/94
to
hba...@netcom.com (Henry G. Baker) writes:
> Good ideas. Now can you formalize these rules & prove conditions
> under which any & all garbage will be collected? I'm trying to
> understand _provably safe styles_ of programming.

Wow, we must really be approaching it from different perspectives.

You seem to want a more formal(?) academic(?) statement, which I'm at a
loss to frame. I probably have some implicit assumption, or assumptions,
that you don't. To me, as a programmer, hearing or saying "there's a top
structure that owns everything inside." expresses the idea precisely
enough. I'm not sure what it is you want me to make more precise.

Tom

Kevin Warne

unread,
Nov 25, 1994, 6:22:27ā€ÆPM11/25/94
to
In article <CzpJ7...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>objections to reference counting that IMO aren't as strong as they
>appear. I await other opinions of course.
>
>0: It orphans circular structures.
<stuff about good programming techniques snipped>

>1: It's slow, due to count update overhead.

<more stuff about programming techniques snipped>

Of course, if you pare down the references to the minimum needed
and inc/dec as necessary, you'll have a fairly efficient system.
Unfortunately it won't be automatic.

I think it's important to maintain a distinction between reference
counting as accounting for explicit memory management and reference
counting collectors as a form of automatic memory management.

As a form of acounting, reference counts (or mark bits, or whatever)
are needed to track liveness. To properly deallocate memory, you
need this accounting information, the actual deletion code, and a
well-formed policy for your application about just who is responsible
for reclaiming memory. With allocated memory comes the responsibility
to deallocate that memory. That responsibility is handed off from
one part of the code to another (either explicitly or inside the
programmer's mind) until that object is deleted.

I believe this is what you were referring to when you wrote about
top structures and head pointers..

With an automatic memory manager, this responsibility along with the
actual deletion code stays inside the memory manager. The programmer
doesn't need to worry about transferring the responsibility to delete
objects because the memory manager will take of it.

So my take on the subject is as follows. If you're keeping track of
objects for explicit storage management (where you pass responsibility
for deleting objects back and forth) then reference-counting is yet
another tool for keeping track of accounting information. If you're
relying on your memory manager to keep track and deallocate dead
objects for you, then reference-counting isn't so hot...

Kevin

____
Kevin Warne / kev...@whistler.com / +1 (604) 932-0606
Suite 204, 1200 Alpha Lake Road, Whistler BC, CANADA, V0N 1B0
Warne's Garbage Collector is the best way to allocate memory in C++ under NT

Henry G. Baker

unread,
Nov 25, 1994, 8:10:07ā€ÆPM11/25/94
to
In article <3b5rjj$k...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:
>In article <CzpJ7...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>>objections to reference counting that IMO aren't as strong as they
>>appear. I await other opinions of course.
>>
>>0: It orphans circular structures.
><stuff about good programming techniques snipped>

There are some cases where one or more elements of the circular
structure can't be reclaimed for other reasons. A good example is the
case of 'properties' on 'property lists' of traditional Lisp (but not
usually Scheme) systems. Since someone can create a reference to a
symbol out of thin air (i.e., by calling '(read)', or by interning a
character string at run-time), symbols can _never_ be reclaimed by the
standard garbage collector if they have ever been given a print name.
(I have argued elsewhere that 'gensym' should _not_ produce symbols
with print-names, and the print-names should be given out the first
time one tries to print them.)

Therefore, the programmer will have to go to some explicit effort --
even in Lisp -- to reclaim certain structures, whether or not they
have cycles.

>>1: It's slow, due to count update overhead.
><more stuff about programming techniques snipped>
>
>Of course, if you pare down the references to the minimum needed
>and inc/dec as necessary, you'll have a fairly efficient system.
>Unfortunately it won't be automatic.

I'm not sure what you mean by 'automatic' in this context. If you
mean 'oblivious to the programmer's coding style', then I suppose that
you are right. But what system programmer is ready to give up his
right to explicitly manage certain resources in certain situations?

What _would_ be nice, would be some way to know at _compile time_ if
certain constraints were violated, so that one would immediately know
that his program would eventually fail at run-time due to premature
evacuation (GC), or a storage leak (object never gets reclaimed).

The 'linear' style of programming achieves this goal, and I expect
that there may be other styles of programming which have interesting
compile-time and/or run-time properties. I guess I'm not ready to
concede that all of the important memory management ideas were found
by 1960 (at least reference counting and GC; see CACM 1960 for both
Collin's and McCarthy's papers).

>As a form of acounting, reference counts (or mark bits, or whatever)
>are needed to track liveness. To properly deallocate memory, you
>need this accounting information, the actual deletion code, and a
>well-formed policy for your application about just who is responsible
>for reclaiming memory. With allocated memory comes the responsibility
>to deallocate that memory. That responsibility is handed off from
>one part of the code to another (either explicitly or inside the
>programmer's mind) until that object is deleted.

What seems to be needed is some way to indicate 'ownership
responsibilities' as part of the _type_ of an object, so that such
things can be useful at compile time. Obviously, if the rules are
simple enough for a programmer to learn and memorize, then there is
some hope that one could build them into compile-time tools to do the
same things.

>I believe this is what you were referring to when you wrote about
>top structures and head pointers..
>
>With an automatic memory manager, this responsibility along with the
>actual deletion code stays inside the memory manager. The programmer
>doesn't need to worry about transferring the responsibility to delete
>objects because the memory manager will take of it.

I'm not sure what fraction of all programmers is able (or ready) to
give up complete control of resource management to a 'black box'
system. One need only look to human economic systems to see that
'private ownership' of resources can often be considerably more
efficient in the utilization of the available resources than can
'system' (i.e., governmental) management.

Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hb/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hb/hbaker/home.html
************* Note change of address ^^^ ^^^^

Tom O Breton

unread,
Nov 25, 1994, 11:22:16ā€ÆPM11/25/94
to
kev...@whistler.com (Kevin Warne) writes:
> Unfortunately it won't be automatic.
>
> I think it's important to maintain a distinction between reference
> counting as accounting for explicit memory management and reference
> counting collectors as a form of automatic memory management.

OK. Actually I agree. Doing it without a lot of programmer effort, and
without introducing error opportunities is important. But so is opening
opportunities for efficiency.

Can what I describe be automated, or largely automated? I think at best
it could be largely automated, requiring a keyword or so and giving
compiler errors under conflicting conditions rather than runtime
failures. (Obviously only applicable to compiled languages)

Some optimizations can be automatic, like hoisting reference count
updates out of loops, or to the endpoints of an entire routine.

Others, like constraining a structure to only take references it can
own, would require communication from the programmer, though I'd love to
be shown wrong.

Perhaps the ideal would be to develop something with GC and then turn it
off when (if) you got around to fine-tuning it.

Kevin Warne

unread,
Nov 26, 1994, 11:57:07ā€ÆAM11/26/94
to
In article <CzuxH...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>Others, like constraining a structure to only take references it can
>own, would require communication from the programmer, though I'd love to
>be shown wrong.

I think that future programming languages will have constructs for compile-time
behavior as well as run-time behaviour. The compile time behavior will be a
superset of what is handled now with makefiles, macros, preprocessors, etc.
Through code that executes at compile-time, developers will be able to specify
exactly how their classes and objects can be used. Given such objects and
metaobjects, I think such behaviour is possible... Right now, probably not...


>Perhaps the ideal would be to develop something with GC and then turn it
>off when (if) you got around to fine-tuning it.

Waitasec... The implication here is that GC is and will always be slower than
hand-coding all the deletes. That's just not true.

1. A collector deallocates a large number of objects all at once. Hand-coded
deletes happen 1 at a time. This makes a difference when writing the
memory manager. The deallocation costs for a good collector are minimal.
(e.g., copying + non-copying collectors, implicit reclamation, etc.)
The deallocation costs for a good explicit memory manager are about the
same (or slightly higher) as the allocation costs.

2. Deallocation is part of the critical path when using explicit memory
management. A spreadsheet recalc that allocates a lot of temporary
memory is going to have to deallocate that memory. And the user is going
to have to wait through that deallocation. A collector can defer collection
to application idle time, further decreasing the cost of deallocation.

4. For short-lived processes, a collection might not be required at all.
This dynamic optimization isn't available to applications that hard-code their
deletes.

5. Collectors can be made parallel. Parallel collectors can run on separate
processors, with the appropriate speed benefits for otherwise singlethreaded
applications.

6. Collectors don't need to be black boxes for speed-hungry developers. There
are many collectors with very flexible interfaces. Developers can use their
knowledge of a collector's operation to further tune their application. And
a good collector should be flexible enough to coexist with an explicit memory
manager.

7. Collectors just get faster as users add more memory.

Now obviously the boundary between what is and is not GC is very fuzzy. As is the
boundary between what is and is not automatic.

But reguardless of where you draw the line, I suggest that most applications today
would benefit from the use of GC.

Kevin Warne

unread,
Nov 26, 1994, 12:42:32ā€ÆPM11/26/94
to
In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:
>(I have argued elsewhere that 'gensym' should _not_ produce symbols
>with print-names, and the print-names should be given out the first
>time one tries to print them.)
>
>Therefore, the programmer will have to go to some explicit effort --
>even in Lisp -- to reclaim certain structures, whether or not they
>have cycles.

Hmm, sounds like the granularity of reference/liveness is too coarse
in this case for proper collection (although it might be just right
for other reasons).


>I'm not sure what you mean by 'automatic' in this context. If you
>mean 'oblivious to the programmer's coding style', then I suppose that
>you are right. But what system programmer is ready to give up his
>right to explicitly manage certain resources in certain situations?

Indeed. Using a mixture of the two is probably the best policy right
now.


>The 'linear' style of programming achieves this goal, and I expect
>that there may be other styles of programming which have interesting

Are there any usuable languages yet for acutally programming in this
style? (I know... I should check out your FTP cache first before
asking...)


>compile-time and/or run-time properties. I guess I'm not ready to
>concede that all of the important memory management ideas were found
>by 1960 (at least reference counting and GC; see CACM 1960 for both
>Collin's and McCarthy's papers).

On the contrary. I think that memory management is just now beginning
to get interesting.


>I'm not sure what fraction of all programmers is able (or ready) to
>give up complete control of resource management to a 'black box'
>system. One need only look to human economic systems to see that
>'private ownership' of resources can often be considerably more
>efficient in the utilization of the available resources than can
>'system' (i.e., governmental) management.

That comparison is reversed (and probably not all that accurate anyways
as objects are a lot less complex than people).

I assume you're making a comparison between global optimization and
control as opposed to multiple distributed entities each optimizing
locally.

Using a collector, each method and object can locally allocate and use
memory without worrying about how to deallocate that memory. Memory
usage can become dynamic. For instance, a method may allocate a new
object and return that object as a matter of course. If the caller
doesn't use the returned object then it will be reclaimed automatically
at little cost.

Contrast this to existing C++ practice... The programmer (the gov't in
this example) has to set out a global allocation policy and manually
trace the lifespan of each object... If a class is reused in a
second application then this lifespan analysis has to be redone
and allocations retraced.

John Max Skaller

unread,
Nov 26, 1994, 10:52:49ā€ÆAM11/26/94
to
>hba...@netcom.com (Henry G. Baker) writes:
>> Good ideas. Now can you formalize these rules & prove conditions
>> under which any & all garbage will be collected? I'm trying to
>> understand _provably safe styles_ of programming.
>
>Wow, we must really be approaching it from different perspectives.
>
>You seem to want a more formal(?) academic(?) statement, which I'm at a
>loss to frame. I probably have some implicit assumption, or assumptions,
>that you don't. To me, as a programmer, hearing or saying "there's a top
>structure that owns everything inside." expresses the idea precisely
>enough. I'm not sure what it is you want me to make more precise.

Yep. Bertrand Russell had one of them too. Only he spent
ages trying to make it precise -- and still only found out just
in time to slip a page into his book saying "Woops -- the whole
thing is flawed!"

You may think that a set is a collection of things,
but Russell showed vague, intuitive (naive) notions are misleading.

See Ellis and Detlefs paper on GC in C++. They try to deduce what
subset of C++ will work with a conservative garbage collector.

To compare a naive and formal statement, see the ARM
(naive) and the C++ Working Paper (attempts to be precise).
Examining those naive and hidden assumptions is important.

Helps in design too. Whenever I "feel" a piece of code
ought to work but I can't explain exactly why -- guess where
the bug pops up?

--
JOHN (MAX) SKALLER, INTERNET:max...@suphys.physics.su.oz.au
Maxtal Pty Ltd,
81A Glebe Point Rd, GLEBE Mem: SA IT/9/22,SC22/WG21
NSW 2037, AUSTRALIA Phone: 61-2-566-2189

Henry G. Baker

unread,
Nov 26, 1994, 2:23:49ā€ÆPM11/26/94
to
In article <3b7pd3$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:
>In article <CzuxH...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>>Others, like constraining a structure to only take references it can
>>own, would require communication from the programmer, though I'd love to
>>be shown wrong.
>
>I think that future programming languages will have constructs for compile-time
^^^^^^^^^^^^^^^^^^^^^^^^^^^

>behavior as well as run-time behaviour. The compile time behavior will be a
>superset of what is handled now with makefiles, macros, preprocessors, etc.
>Through code that executes at compile-time, developers will be able to specify
>exactly how their classes and objects can be used. Given such objects and
>metaobjects, I think such behaviour is possible... Right now, probably not...

Traditional Lisp macros, compiler-macros and advisors can already
fulfill some of these roles. It would be nice if we could better
formalize some of these tasks.

>Waitasec... The implication here is that GC is and will always be slower than
>hand-coding all the deletes. That's just not true.
>
>1. A collector deallocates a large number of objects all at once. Hand-coded

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This depends upon the collector, but is generally true.


> deletes happen 1 at a time. This makes a difference when writing the
> memory manager. The deallocation costs for a good collector are minimal.
> (e.g., copying + non-copying collectors, implicit reclamation, etc.)
> The deallocation costs for a good explicit memory manager are about the
> same (or slightly higher) as the allocation costs.

'Finalization' is becoming a more and more important issue, so the
interaction of GC and finalization needs to be clarified. With explicit
deallocation, the timing of finalization is usually better defined.

>2. Deallocation is part of the critical path when using explicit memory
> management. A spreadsheet recalc that allocates a lot of temporary
> memory is going to have to deallocate that memory. And the user is going
> to have to wait through that deallocation. A collector can defer collection
> to application idle time, further decreasing the cost of deallocation.

There's no reason why 'lazy' and/or parallel deallocation can't be done
with explicit deallocation.

>4. For short-lived processes, a collection might not be required at all.
> This dynamic optimization isn't available to applications that hard-code their
> deletes.

You have just made a pretty good argument for more tailored memory
management strategies.

>5. Collectors can be made parallel. Parallel collectors can run on separate
> processors, with the appropriate speed benefits for otherwise singlethreaded
> applications.

Other kinds of memory management schemes can also be made parallel.
'Decomposers' can easily be run as a separate thread -- especially
in the context of 'lazy' deallocation.

>6. Collectors don't need to be black boxes for speed-hungry developers. There
> are many collectors with very flexible interfaces. Developers can use their
> knowledge of a collector's operation to further tune their application. And
> a good collector should be flexible enough to coexist with an explicit memory
> manager.

Good idea. Now how to develop good, high-level protocols for communication
with an automatic resource manager.

>But reguardless of where you draw the line, I suggest that most applications today
>would benefit from the use of GC.

Although much work has been done on GC for 'distributed' applications,
I don't think that the benefits of GC for these applications have been
so clearly demonstrated yet.

John Ellis

unread,
Nov 26, 1994, 5:51:07ā€ÆPM11/26/94
to
John Max Skaller writes:

See Ellis and Detlefs paper on GC in C++. They try to deduce what
subset of C++ will work with a conservative garbage collector.

Just a minor correction: We designed a safe subset that would work
with a wide range of garbage collection algorithms, not just
conservative algorithms.

Erik Naggum

unread,
Nov 26, 1994, 6:20:18ā€ÆPM11/26/94
to
[John Ellis]

pardon me for asking the seemingly obvious, but where would I find this
paper?

#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.

Kevin Warne

unread,
Nov 26, 1994, 6:35:38ā€ÆPM11/26/94
to
In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:
>There's no reason why 'lazy' and/or parallel deallocation can't be done
>with explicit deallocation.

Unfortunately the code required to signal 'delete me please' takes
about as long to execute as does the average case deallocation.
(Assuming the presence of a slick memory manager that will delete objects
in ~ 15 cycles) With a collector this overhead doesn't exist.


>You have just made a pretty good argument for more tailored memory
>management strategies.

There are many different techniques for creating and destroying objects.
Using a GC adds a few techniques. By having access to and using both
automatic and explicit memory managers, developers should be able to
create better applications as a result.


>Although much work has been done on GC for 'distributed' applications,
>I don't think that the benefits of GC for these applications have been

I think there's a lot more work to do to create decent environments
for creating distributed applications. Making a distributed GC work
well would be one step. It's so easy in a distributed environment to
orphan processes and objects, it'd be nice to have those partitioned
objects to take care of themselves.

Scott McLoughlin

unread,
Nov 26, 1994, 3:59:26ā€ÆPM11/26/94
to
kev...@whistler.com (Kevin Warne) writes:

> Contrast this to existing C++ practice... The programmer (the gov't in
> this example) has to set out a global allocation policy and manually
> trace the lifespan of each object... If a class is reused in a
> second application then this lifespan analysis has to be redone
> and allocations retraced.
>

Howdy,
Well put. I think it might be important here on comp.lang.c++
to emphasize that GC is not just (1) lazy programming or even
(2) useful for implementing certain algorithms. (Both might be true
for specific cases).
GC, i.e. "automatic storage management", is a key player
in how easily system modularization and code reuse is achieved. In
my experience at various run-o-the-mill coding shops, modularization
and reuse has often broken down over issues of resource management.
For these non-rocket scientists, GC would have been a god send and
would have helped them leap the barrier between OO hype and practice.
OTOH, I do not think C/C++ should be retrofitted with
GC - breaks the spirit of the langauges. Which leaves a burning
question.....

=============================================
Scott McLoughlin
Conscious Computing
=============================================

Message has been deleted

Rick Busdiecker

unread,
Nov 27, 1994, 9:20:52ā€ÆPM11/27/94
to Kevin Warne
In article <3b7pd3$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:

I think that future programming languages will have constructs for
compile-time behavior as well as run-time behaviour. The compile
time behavior will be a superset of what is handled now with
makefiles, macros, preprocessors, etc. Through code that executes
at compile-time, developers will be able to specify exactly how
their classes and objects can be used. Given such objects and
metaobjects, I think such behaviour is possible... Right now,
probably not...

What you are describing is not `future programming languages'. Such
languages exist. One, called Common Lisp, was actually getting a fair
amount of use a decade ago. Unfortunately, most people gave up on it
right about the time that significant improvements in compiler and
garbage collection technology were becoming standard for conventional
architecture Common Lisp implementations.

Code is data. Data is code. Lisp leverages this fact while glorified
assembly languages hide it. You should not need seperate languages
(such as preprocessor-speak) to define new code transformations.

--
Rick Busdiecker <r...@lehman.com> Please do not send electronic junk mail!
Lehman Brothers Inc.
3 World Financial Center "The more laws and order are made prominent, the
New York, NY 10285-1100 more thieves and robbers there will be." --Lao Tzu

Andrew Main

unread,
Nov 29, 1994, 3:17:50ā€ÆPM11/29/94
to
In article <CzxA...@rheged.dircon.co.uk>,
si...@rheged.dircon.co.uk (Simon Brooke) writes:
>Look, we live in a world in which,
>
>(i) according to all the trade comics, there is a huge backlog of
>code waiting to be written (although this may be because you get more
>kudos for specifying than for coding...);

That's my experience too. It's sad.

>(ii) the average C (or similar language) programmer spends more time
>debugging than coding (*);

Often true, but not by as large an extent as you might think.

>(iii) the overwhelming majority of bugs in C (or similar language)
>programs are problems with memory allocation (walking off the end of
>the world; early deallocation; leaks) (*);

Not if the programmer is careful. I hardly ever have this type of bug.

{...}
>((*) Asterisks indicate bald assertions: I can't cite anything to back
>these up, but it's my experience).
{...}
> But in
>these days of megabytes of core, hand crafting (for example) memory
>management is at best an almost masturbatory vanity, and at worst a
>gross waste of resources (since the time you waste getting the bloody
>thing to work is far more valuable than the totality of the processor
>time it will ever save).

Hand-crafted memory management is *essential* in some cases. There is
no automatic allocation/deallocation system that allows programs to do
everything they can do in C (and certain other languages). In a
functional language, this isn't a problem, but functional languages
really aren't appropriate for most applications.

>Using automatic store management of whatever kind removes the single
>major cause of error in code, and allows the programmer to concentrate
>on the logic of the program. It thus greatly speeds application
>development, and enhances application reliability.

As I've indicated above, memory management only a major cause of errors
if the programmer is sloppy. For good programmers, it's just another
arena for bugs to display themselves in.

>There is a place, of course, for low level languages like C:

C is not a low-level language. What I think you mean is that it allows
low-level access to many OS functions.

> These
>things will be used by millions of users and will be required to run
>for long periods perfectly reliably. Such programs are rare,
>thankfully, because producing code of this quality is hard and far
>fewer people can do it than think they can.

Ideally, EVERY program would operate reliably for long periods of time.
Every programmer should aim for that, and not just when writing kernels.

>Building code is an engineering problem. Suppose you are building a
>vehicle for a speed record attempt, you might very well argue 'right,
>we'll weld this assembly by hand, because the very best craftsmen can
>work to closer tolerances than a robot can, and this assembly is
>critical'. However, if you were production engineering the next Ford
>Mondeo, and you suggested hand welding the body shell, you'd be out of
>a job.

A valid argument, but again, you miss the point of explicit memory
management. There are some things that implicit memory management
*can't* do. Sure, you might not want to do such things, and the way
that functional languages are used you don't want to. But there is a
need for this ability.

-andrew

Peter Kron

unread,
Nov 30, 1994, 10:39:44ā€ÆPM11/30/94
to
From: si...@rheged.dircon.co.uk (Simon Brooke)

> Look, we live in a world in which,
>
> (i) according to all the trade comics, there is a huge
> backlog of code waiting to be written (although this may
> be because you get more kudos for specifying than for
> coding...);
>
> (ii) the average C (or similar language) programmer spends
> more time debugging than coding (*);
>
> (iii) the overwhelming majority of bugs in C (or similar
> language) programs are problems with memory allocation
> (walking off the end of the world; early deallocation;
> leaks) (*);
>
> (iv) silicon is cheap and rapidly getting cheaper;
>
> (v) programmers are expensive and steadily getting more
> so.

>
> ((*) Asterisks indicate bald assertions: I can't cite
> anything to back these up, but it's my experience).

Very well put. Add to (iii)...more so if weighted by cost.
---
NeXTMail:peter...@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022

Marco Antoniotti

unread,
Dec 1, 1994, 9:58:49ā€ÆAM12/1/94
to
In article <3bg29e$g...@holly.csv.warwick.ac.uk> cs...@csv.warwick.ac.uk (Andrew Main) writes:


From: cs...@csv.warwick.ac.uk (Andrew Main)
Newsgroups: alt.lang.design,comp.lang.c,comp.lang.c++,comp.lang.lisp
Date: 29 Nov 1994 20:17:50 -0000
Organization: University of Warwick, Coventry, UK
Lines: 76
NNTP-Posting-Host: holly-fddi.csv.warwick.ac.uk
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

...

Hand-crafted memory management is *essential* in some cases. There is
no automatic allocation/deallocation system that allows programs to do
everything they can do in C (and certain other languages). In a
functional language, this isn't a problem, but functional languages
really aren't appropriate for most applications.

^^^^

C/C++ aren't appropriate for most applications either :)

...

C is not a low-level language. What I think you mean is that it allows
low-level access to many OS functions.

...

As do most serious Common Lisp implementations.

...

A valid argument, but again, you miss the point of explicit memory
management. There are some things that implicit memory management
*can't* do. Sure, you might not want to do such things, and the way
that functional languages are used you don't want to. But there is a
need for this ability.

That is correct. I am just now writing a CL library that does need
explicit memory management for a lot of data structures I build "on
the fly", but that cannot be collected since they get memoized in a
has table. I can build it easily relying on the garbage collector on
as "as needed" basis.

Happy Lisping

Stephen Benson

unread,
Dec 3, 1994, 6:09:19ā€ÆAM12/3/94
to

In article <1994Dec01....@corona.com>, Peter Kron (pk...@corona.com) writes:
>From: si...@rheged.dircon.co.uk (Simon Brooke)
>> Look, we live in a world in which,
.
.
.

>> (v) programmers are expensive and steadily getting more
>> so.

Not if you buy them in India or parts of eastern Europe. Or many other non-US,
non-Euro countries. (Read _The Decline of the American Programmer_ or whatever
it's called)

(They're paid pretty damn badly in the UK too -- but then just about everyone
is)

--
: stephen benson : : : : : : : : step...@scribendum.win-uk.net

Erik Naggum

unread,
Dec 3, 1994, 5:14:02ā€ÆPM12/3/94
to
[Simon Brooke]

| Look, we live in a world in which,

:


| (v) programmers are expensive and steadily getting more
| so.

[Stephen Benson]

| Not if you buy them in India or parts of eastern Europe. Or many other
| non-US, non-Euro countries. (Read _The Decline of the American
| Programmer_ or whatever it's called)

Edward Yourdon: Decline and Fall of the American Programmer, ISBN 0-13-203670-3

last I heard, as in: when I got loads of mail from several Indian companies
thinking I'd like use their cheap Windoze programming services just because
my company has "software" in its name, they are inexpensive all right, but
also getting more expensive much faster than their American counterparts.
must be the rise in demand.

Yourdon basically claims that since they got into the programming business
much later than American programmers, they use CASE tools, which is
Yourdon's favorite solution to the software crisis. there is probably some
truth to his claims, and from what I have seen of the C++ world, it is nigh
impossible to write C++ code without a machine doing the work for you.

I have yet to see comparisons of productivity between a CASE/C++ programmer
vs a LISP programmer, but they claim five times increase in productivity,
and that's mostly from the dropping number of bugs, for the CASE/C++ combo,
and ten times increase in productivity with LISP. such figures are
probably way too round to be accurate, but if there's truth to them, you'd
still be way ahead with a real programming language compared to a crippled
language, however good the crutches.

#<Erik>
--
The check is in the mail. This won't hurt. You'll find it on the Web.

Andrew Koenig

unread,
Dec 4, 1994, 11:06:50ā€ÆAM12/4/94
to
In article <19941203T2...@naggum.no> Erik Naggum <er...@naggum.no> writes:

> Yourdon basically claims that since they got into the programming business
> much later than American programmers, they use CASE tools, which is
> Yourdon's favorite solution to the software crisis. there is probably some
> truth to his claims, and from what I have seen of the C++ world, it is nigh
> impossible to write C++ code without a machine doing the work for you.

Really? That's news to me.

> I have yet to see comparisons of productivity between a CASE/C++ programmer
> vs a LISP programmer, but they claim five times increase in productivity,
> and that's mostly from the dropping number of bugs, for the CASE/C++ combo,
> and ten times increase in productivity with LISP. such figures are
> probably way too round to be accurate, but if there's truth to them, you'd
> still be way ahead with a real programming language compared to a crippled
> language, however good the crutches.

The closest thing I've seen to a believable study was by Howard Trickey,
who implemented a fairly substantial program (O(10K lines)) in Lisp and
C++ simultaneously and took notes. His conclusion was that in that
particular context, choice of language didn't affect his productivity much.

Yes, it's only one data point and yes one can find all kinds of flaws
in the methodology, but I haven't seen any evidence that isn't at least as flawed.
--
--Andrew Koenig
a...@research.att.com

Giuliano Procida

unread,
Dec 5, 1994, 8:55:21ā€ÆAM12/5/94
to
If you are into this sort of thing, you may be interested in:

"Haskell vs. Ada vs. C++ vs. Awk vs. ... An Experiment in Software
Prototyping Productivity" by Paul Hudak and Mark P. Jones,

Available via anonymous ftp as:
ftp://nebula.cs.yale.edu/pub/yale-fp/papers/NSWC/jfp.ps

... in which there are also references to the original write-up of the
prototyping experiment.

Giuliano.

Lawrence G. Mayka

unread,
Dec 5, 1994, 8:55:53ā€ÆAM12/5/94
to

I am obliged to point out that, given AT&T's very close past and
present relationship with C++ and strong private and public commitment
to it, any refutation of Mr. Trickey's ancient memorandum and
Mr. Koenig's comments above, including any contrary evidence, would
very likely be stamped AT&T Proprietary--Not for Public Disclosure.
AT&T employees cannot be considered unbiased, unconstrained
participants in discussions of this kind. I can only suggest that
readers learn CLOS and C++ for themselves, try both languages on
appropriate and significant applications, making use of development
environments appropriate to each, and then come to their own
conclusions.

If this thread is to continue at all, I suggest that it not be
cross-posted to widely disparate newsgroups.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Andrew Koenig

unread,
Dec 5, 1994, 12:27:09ā€ÆPM12/5/94
to
In article <LGM.94De...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

> I am obliged to point out that, given AT&T's very close past and
> present relationship with C++ and strong private and public commitment
> to it, any refutation of Mr. Trickey's ancient memorandum and
> Mr. Koenig's comments above, including any contrary evidence, would
> very likely be stamped AT&T Proprietary--Not for Public Disclosure.
> AT&T employees cannot be considered unbiased, unconstrained
> participants in discussions of this kind.

No one is unbiased or unconstrained.

For the record, I will say that if a refutation of Trickey's paper
exists, I have not seen it. Moreover, I have no reason to believe
one exists, if only because it is hard to refute experience.

I will also say that I work on C++ because I believe it is a
worthwhile endeavor, not because anyone is forcing me to do so.
And it is *far* from the only programming language I speak fluently.
--
--Andrew Koenig
a...@research.att.com

Clint Hyde

unread,
Dec 9, 1994, 12:49:16ā€ÆPM12/9/94
to
In article <3b7s28$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:
--> In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:

--> >system. One need only look to human economic systems to see that
--> >'private ownership' of resources can often be considerably more
--> >efficient in the utilization of the available resources than can
--> >'system' (i.e., governmental) management.
-->

I don't think this is a good argument. let us suppose that every
citizen of town A says "I can do a better job of aquiring resources
[water, electricity] and disposing of them [sewage] than anyone else,
including the gov't", so that every citizen then has his/her own water
intake and sewage exhaust system.

you then have totally incompatible mechanisms, everyone is competing
with each other in a variety of ways. there cannot be any common
infrastructure, except perhaps by accident, so the only solution you can
have where everyone's method doesn't conflict is delivered bottled water
and septic tank removal, and gasoline/propane electricity generators.

suppose that every place you lived in your life had completely different
electrical systems, some AC, some DC, various voltages and frequencies
(for the AC locations). you'd have to replace all your electrical
appliances every time you moved, or you'd have to buy a variety of
power-converters, or accept restrictions on where you could go. you'd
have limited shopping, because you'd either have to buy more power
converters in order to shop at store X because they carry 94-Hz 66-volt
AC, and your house is only working with 150Hz 105-volt AC (except the
parts that are various DC voltages). you'd end up with no uniform
electrical code for construction, resulting in substantial safety
hazards (the average person doesn't/wouldn't know enough about
electrical safety to always do the right thing).

I believe the present mechanism of city/county ogranized and controlled
water/sewer systems is actually the right way to go. any basic
infrastructure like that (I'd include GC in this, though it's a
different domain) should be created and operated as a benign monopoly,
the way they are now.

GC should be exactly the same across the board, so that you don't have
problems about combining two different things that have different
built-in GC'ers, and so that you can tune globally, and learn one common
mechanism.

I also like the idea of being able to free single objects explicitly. I
generally know exactly when many of my objects can be destroyed/GC'd.

on the LispM you could control this better with using the consing areas.
I've forgotten how now, but it was something like:

(with-cons-area (temp-area)
(do-various-things)
(copy-result-out-of-temp-area)
) ;de-allocate area

-- clint

Henry G. Baker

unread,
Dec 12, 1994, 10:04:27ā€ÆPM12/12/94
to
In article <3ca5as$i...@info-server.bbn.com> Clint Hyde <ch...@bbn.com> writes:
>In article <3b7s28$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:
>--> In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:
>
>--> >system. One need only look to human economic systems to see that
>--> >'private ownership' of resources can often be considerably more
>--> >efficient in the utilization of the available resources than can
>--> >'system' (i.e., governmental) management.
>
>I don't think this is a good argument. let us suppose that every
>citizen of town A says "I can do a better job of aquiring resources
>[water, electricity] and disposing of them [sewage] than anyone else,
>including the gov't", so that every citizen then has his/her own water
>intake and sewage exhaust system.
>
>you then have totally incompatible mechanisms, everyone is competing
>with each other in a variety of ways. there cannot be any common
>infrastructure, except perhaps by accident, so the only solution you can
>have where everyone's method doesn't conflict is delivered bottled water
>and septic tank removal, and gasoline/propane electricity generators.

By next Jan. 1, California residents will (finally) have the option of
going to someone other than Pactel or GTE for their intrastate
telephone calls. In another 3 years or so, we will have the capability
of choosing our own _electricity_ supplier (so-called 'wheeling'). Now
if we could also choose our own water supplier, then we'd be in business.

These systems use compatible equipment, but compete on different
kinds of services and competing sets of managment. I expect costs
to go down, and conservation to go up.

In another year or so, there's going to be wholesale bloodletting in
the high-speed internet access business. I expect T1 speeds (1Mbit)
to the home to quickly become standard, with prices about what people
are currently paying. Between the telephone cos., the cable cos., the
cellular cos., and direct satellite broadcast (for Usenet), bandwidth
to the home will no longer be a problem. This would never happen
without competition, not in a million years.

This is a perfect example of where it is better to drastically expand
the supply, than to waste very much time and effort to optimize a
resource that is grossly inadequate.

----

Nearly every resource is modestly substitutable for other resources,
and only a market system is capable of making these tradeoffs in a
distributed way. Insofar as GC must be centrally controlled, it
will never make it out of the single-processor environment.

I like GC, but we must find ways of taming its exclusionary attitude
towards all other methods of resource management.

Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hb/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hb/hbaker/home.html
************* Note change of address ^^^ ^^^^

(It _is_ accessible, but Netcom is loaded; keep trying.)


Richard Billington

unread,
Dec 15, 1994, 10:39:04ā€ÆAM12/15/94
to
Dick Gabriel said that Lucid's experience with developing their
"C" tools (Energize), their experience indicated at least a 3 fold
difference between using C and Lisp as development languages - the
increased being in lisp's favour (i.e., productivity was 3 to 1 improved
if one used lisp). I agree with LGM, this proves nothing, but is simply
some heresay which is contrary to Mr. Trickey's heresay.
--
Richard Billington
bu...@cc.gatech.edu

Erik Naggum

unread,
Dec 15, 1994, 3:49:09ā€ÆPM12/15/94
to
[Richard Billington]

| I agree with LGM, this proves nothing, but is simply some heresay which
| is contrary to Mr. Trickey's heresay.

I have read Howard Trickey's report in SIGPLAN 23(2). "heresay" is a good
term to cover the article. there is not enough information in this paper
to conclude anything, not even restricted to the specific case under study.
the lack of rigor in the presentation of both arguments and conclusion, the
lack of _relevant_ detail in the description of each of the versions of the
program, and the unwarranted generalizations individually and together make
this paper utterly worthless. it is also very, very old, and while its
relevance may have been non-zero at the time it was written, today it is
not worth the paper it was printed on.

according to the author's own data, he has 42KLOC experience in procedural
languages (Pascal, C, PL/I, and PDP/11 assembly), and 3KLOC experience in
Franz Lisp. "Of course one reason why the two versions are about the same
size is that my desire to have identically behaving programs gave a strong
incentive to look at one version when coding the other. Perhaps a True
Lisper has a completely different, more compact coding style. ... I don't
think there are any places where one language would have dictated a vastly
different design than the other." the author is terminally confused on the
difference between language and implementation. "However, for finding
syntax errors, C++ has an [sic] big advantage. If a 500 line C++ file has
errors, you will find out about all of them (up to a limit) in about 20
seconds. The Lisp compilation aborts after he first error, so it can take
many minutes to find them all." this author is also hung up in the time
spent in garbage collection, without giving any kind of statistics for the
largely hidden memory allocation costs of C++. "A rather bigger graph took
109 seconds in C+= and 805 seconds in Lisp, because Lisp needed 16 garbage
collections to finish." the single most damaging aspectd of garbage
collection might indeed be that systems report the time spent in garbage
collection separately. a telling comparison is the following: "If you bind
new Lisp locals where logical, a function that is easily comprehensible in
C++ looks too long in Lisp." a fundamental difference between the two
languages is precisely how and when you create functions. the author
dismisses macros, or what I understand to be macros, anyway: "Perhaps a
Lispa advocate would counter that it isn't difficult to customize the Lisp
syntax to whatever one desires. My experience is that this isn't a good
idea, because tools like the debugger operate on a more canonical
representation so it is best to be familiar with your program in "standard"
Common Lisp. It is also to the benefit of other readers not to use a
strange syntax." it may seem that the author has failed to do the simplest
thing in comparing languages and compilers: turn on optimization. "The
compiler flags for both versions were as I set them for normal development:
-g for C++ and the defaults for Lisp. The C++ version could be sped up
using optimization (-O), an the Lisp version could be sped up using
`fast-entry' (avoids checks for the correct number of arguments)." the
author uses only a small fraction of the Lisp primitives: "It took me about
two weeks to design and implement C++ things that were primitives in Lisp;
after that, the added repertoire of Lisp didn't make much of a difference."

I conclude that we have a C/C++ pro and a Lisp amateur comparing apples and
oranges, and coming out with what he knows best. zero information value.
worse yet, those who refer to this paper must do so for political reasons,
or marketing, which amounts to the same thing, and they can hardly be
intellectually honest about it.

Clint Hyde

unread,
Dec 16, 1994, 12:23:11ā€ÆPM12/16/94
to
In article <hbakerD...@netcom.com> hba...@netcom.com (Henry G. Baker) writes:
--> In article <3ca5as$i...@info-server.bbn.com> Clint Hyde <ch...@bbn.com> writes:
--> >In article <3b7s28$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:

--> >--> In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:
--> >
--> >--> >system. One need only look to human economic systems to see that
--> >--> >'private ownership' of resources can often be considerably more
--> >--> >efficient in the utilization of the available resources than can
--> >--> >'system' (i.e., governmental) management.
--> >
--> >I don't think this is a good argument. let us suppose that every
--> >citizen of town A says "I can do a better job of aquiring resources
--> >[water, electricity] and disposing of them [sewage] than anyone else,
--> >including the gov't", so that every citizen then has his/her own water
--> >intake and sewage exhaust system.
--> >
--> >you then have totally incompatible mechanisms, everyone is competing
--> >with each other in a variety of ways. there cannot be any common
--> >infrastructure, except perhaps by accident, so the only solution you can
--> >have where everyone's method doesn't conflict is delivered bottled water
--> >and septic tank removal, and gasoline/propane electricity generators.

predictably, I wasn't quite specific enough...

--> By next Jan. 1, California residents will (finally) have the option of
--> going to someone other than Pactel or GTE for their intrastate
--> telephone calls. In another 3 years or so, we will have the capability
--> of choosing our own _electricity_ supplier (so-called 'wheeling'). Now
--> if we could also choose our own water supplier, then we'd be in business.

this is really good. my question is: who owns, installs, and maintains
the physical infrastructure? i.e., the power lines, the water lines, the
sewer pipes? AT&T installed most of the telephone lines around the
country prior to the breakup (and the baby bells since then).
effectively, they own the lines. (well, they used to)

what happens when a new provider arrives?

my argument is that there MUST be a common standard, set by the gov't,
and there MUST NOT be any installation of competing infrastructure
hardware: i.e., if there are two electric utilities, you don't get two
different sets of power lines down your street. if there were two water
utilities, or one day a new one arrived, you don't want them ripping up
the streets to install *their* pipes to the houses that want to be their
customers...

--> These systems use compatible equipment, but compete on different
--> kinds of services and competing sets of managment. I expect costs
--> to go down, and conservation to go up.

in fact, these systems use the *same* equipment (other than phone
switches). your whole area is on the same power grid, and the gov't
requires power companies to be exactly in phase on that grid--it's the
same set of power lines.

I am suggesting that GC should exist in much the same way--i.e., the
individual application author SHOULD NOT be in the business of using
some randomly selected memory allocator. *I* don't want to have to do
that, I haven't heard good things about any one of them other than the
good GCs in modern lisps. I (used to) read regularly about this/that/
the-other malloc variant available as a C library, each one claiming
that theirs is better than the others.

--> In another year or so, there's going to be wholesale bloodletting in

maybe. sometime or other, almost certainly. 95/96? I doubt it.

--> the high-speed internet access business. I expect T1 speeds (1Mbit)

T1 into the home? I'd guess not this decade, not via the existing
infrastructure. the demand will be way too low.

--> to the home to quickly become standard, with prices about what people
--> are currently paying. Between the telephone cos., the cable cos., the
--> cellular cos., and direct satellite broadcast (for Usenet), bandwidth
--> to the home will no longer be a problem. This would never happen
--> without competition, not in a million years.

telephone and video cable are two totally different infrastructures. if
repair is needed, someone comes and digs up your yard.

cellular and DBS have no physical infrastructure that exists over or
under public thoroughfares--a very different situation. they cost a good
bit more. maybe that will change.

I can see T1-volume traffic through a DBS connection, or a cellular one,
but my experience with cellular is that it's scratchy, which means the
packet retransmit rate would be high. scratchy degradation is ok on tv
signals, because noise in that kind of signal is fairly trivial.

I'd suggest that the increased bandwidth would occur no matter what, but
certainly not at the same rate as having competition causes. it's also
driven by the unexpected existence of new demand...such as the explosion
of the home computer market. without that, we probably would have
nothing beyond voice-grade hard-wired and cellular telephony.

--> This is a perfect example of where it is better to drastically expand
--> the supply, than to waste very much time and effort to optimize a
--> resource that is grossly inadequate.

I think the existing telephone infrastructure is inadequate for
T1-quality transmission. the cost of upgrading/replacing it would be
ENORMOUS. I'd say that it's a safe conclusion it won't happen. so an
alternative is necessary. if that alternative is video cable, or RF,
both of those will be found to become inadequate at some point.
personally, I favor things that have no physical infastructure, as they
don't involve hugely expensive retrofits.

--> Nearly every resource is modestly substitutable for other resources,

not true. water does not substitute for electricity.

--> and only a market system is capable of making these tradeoffs in a

the market makes the tradeoff between suppliers, not necessarily between
different resources. when the resources are reasonably substitutable,
the tradeoff is between suppliers.

if I wished, I could disconnect my house from the power grid, and
substitute a gasoline-burning power generator. I'd then have a different
supplier of my electricity. no infrastructure change required, other
than some electrical connections at the breaker-box inside the house.
and it would not even be remotely efficient in comparison.

but I am not able to substitute some other thing for electricity.

I can make two phone calls and switch my trash collector. in fact, I did
so in 92. saved $5 per month, and got one that did more recycling.

--> distributed way. Insofar as GC must be centrally controlled, it
--> will never make it out of the single-processor environment.

I don't agree. I don't know how to improve things, but I don't think
that GC/memory-allocation should be managed separately and differently
by every single application AND the OS. I think *THAT* is what won't
make it...

--> I like GC, but we must find ways of taming its exclusionary attitude
--> towards all other methods of resource management.

is it exclusionary? I don't know.

I believe that the individual programmer should not have to deal with
memory allocation.

how often do you do anything about your operating system's GC/allocation
mechanism? PC and Mac users have to worry about it moderately often,
although the newest machines can hold enough RAM that it's less often a
problem than it used to be...

-- clint

btw, for someone who (perhaps) suggested otherwise, I vote Libertarian.

Cyber Surfer

unread,
Dec 16, 1994, 2:31:24ā€ÆPM12/16/94
to
In article <19941215T2...@naggum.no> er...@naggum.no "Erik Naggum" writes:

> dismisses macros, or what I understand to be macros, anyway: "Perhaps a
> Lispa advocate would counter that it isn't difficult to customize the Lisp
> syntax to whatever one desires. My experience is that this isn't a good
> idea, because tools like the debugger operate on a more canonical
> representation so it is best to be familiar with your program in "standard"
> Common Lisp. It is also to the benefit of other readers not to use a
> strange syntax." it may seem that the author has failed to do the simplest

There's a simple solution: document the macros and treat them as
additions to the language. Even C programmers do that! How else
can you use tools like lex, yacc, or embedded SQL? Someone had to
document these tools.

Forth, Lisp, and even Smalltalk programmers can look at the language
system they use for programming in a slightly different way, by seeing
that the system itself is a program written in the language it compiles.
(See PJ Brown's book Writing Interactive Language Compilers and
Interpreters for a definition of "compiler" or "interpreter".)

I often suspect that mistakes like these could be unfamiliarity
with compiler theory.
--
CommUnity: ftp://ftp.demon.co.uk/pub/archives/community
Me: http://cyber.sfgate.com/examiner/people/surfer.html

Message has been deleted

William Paul Vrotney

unread,
Dec 19, 1994, 12:39:31ā€ÆAM12/19/94
to
In article <D0xAI...@rheged.dircon.co.uk> si...@rheged.dircon.co.uk (Simon Brooke) writes:

> In article <BUFF.94De...@pravda.world>,

> Well, cf Erik Naggum's response to the above, Dick Gabriel in
> particular and the Lucid team in general must be taken as LisP
> experts, so that may to some extent weight against their experience as
> reported above. Nevertheless, if this statement is something Dick has
> published and provided some quantifiable support for, it would be
> extremely useful material. There is so *damn* *little* serious study
> of the comparative productivity of differing programming tools.

It seems like computer science is stuck with surveys here. Is there any
more scientific approach?

>
> No, I'm not expecting any study which "conclusively proves" that any
> particular language is "the best" for any but very special purposes
> (and indeed I'd take with a large pinch of salt any study which
> claimed to); but it's useful to hear from people who have a wide range
> of expertise in more than one language.
>
> My own view, for what it's worth, is that LisP is probably more like
> eight or ten times as productive as C, in terms of delivered user
> level functionality per programmer hour (I don't know enough C++ to
> comment). However I'm biased towards LisP; although I used BCPL before
> LisP, LisP is far and away my preferred language. My experience runs
> to about 80KLOC in C, probably five times that in various LisPs (but
> mainly Interlisp/LOOPS).
>

This is close to my experience. I have mentioned that there feels like a 5
to 1 productivity ratio in Lisp to C++ to my colleagues who are more or less
experts in both Lisp and C++ and the usual response is that it is too
conservative an estimate. One part of the big ratio is attributed to the
lack of a good C++ library. I created a Lispy C++ library for myself which
includes lists, dynamic typing and other nice Lispisms, and this helped
considerably but the productivity ratio is still pretty high. I think an
even bigger part of the ratio is due to the lack of the ability of C++ to do
what is referred to as exploratory programming. If exploratory programming
is important to developing the application then this makes the ratio really
high. For example, in developing my Go program in Lisp the ability to do
exploratory programming feels like it shoots up the ratio to more like 100
to 1, seriously!

This leads me to believe that for a complex enough application, like a Go
program, it is better to develop it in Lisp then recode in C++ in the last
two weeks before delivery. Or better yet compile efficiently (enough) in
Lisp when that becomes more feasible.

It is interesting to muse with the powerful software automation of the
"software ICs" paradigm, such as NEXTSTEP, in the future perhaps the only
interesting programming will be consuming software ICs, producing software
ICs and otherwise exploratory programming. Contrary to popular opinion,
perhaps Lisp (or Lisp like languages) will be one of the few surviving human
written programming languages of the future.
--
William P. Vrotney - vro...@netcom.com

Nick Mein

unread,
Dec 19, 1994, 8:05:53ā€ÆPM12/19/94
to

Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
Vogt have written that programming in Lisp is 2 - 10 time more productive
than programming in C (two of the posters professed to having little
experience with C++). Surprisingly, no suggestion is made that this
productivity advantage is restricted to any particular problem domain (perhaps
Lisp is 10x more productive for an NLP system, but only 2x more productive
if you are writing a Unix device driver?).

Anyway, I found the claims rather surprising, so here is a challenge. The
following toy program loads a 256x256 8 bit grayscale image from a file,
inverts it, and saves it to another file. I consider myself to be an
intermediate-level C++ programmer. The program took me an hour (almost to
the minute) to design, code, debug and test (Ok, I didn't test it very much).
I look forward to seeing the previous posters' Lisp implementations of the
same. It should only take them 6 - 30 minutes to knock them together, less
if they are reasonably experienced in Lisp.


// Toy C++ program that reads 256x256 gray-scale image from a file,
// and writes out the "negative" of the image.

#include <stdio.h>
#include <assert.h>

#define MEM_ALLOC_ERROR -1
#define FILE_OPEN_ERROR -2
#define FILE_READ_ERROR -3
#define FILE_WRITE_ERROR -4

const int N = 256;

class Image
{
public:
Image();
~Image();
int LoadImage(char* filename);
int SaveImage(char* filename);
void Invert();

private:
int width;
int height;
unsigned char* data;
};

Image::Image()
{
data = NULL;
};

Image::~Image()
{
delete data;
}

int Image::LoadImage(char* filename)
{
FILE* f;

assert(data == NULL);

if ((f = fopen(filename, "rb")) == NULL)
return FILE_OPEN_ERROR;

width = N;
height = N;

if ((data = new unsigned char[(size_t)width * height]) == NULL) {
fclose(f);
return MEM_ALLOC_ERROR;
}
if (fread(data, sizeof(unsigned char), (size_t)width * height, f)
!= (size_t)width * height) {
delete data;
fclose(f);
return FILE_READ_ERROR;
}
fclose(f);
return 0;
}

int Image::SaveImage(char* filename)
{
FILE* f;

assert(data);

if ((f = fopen(filename, "wb")) == NULL)
return FILE_OPEN_ERROR;

if (fwrite(data, sizeof(unsigned char), (size_t)width * height, f)
!= (size_t)width * height) {
fclose(f);
return FILE_WRITE_ERROR;
}
fclose(f);
return 0;
}

void Image::Invert()
{
unsigned char* p;

assert (data);

for (p = data; p != data + width * height; p++)
*p = 255 - *p;
}

int main(int argc, char* argv[])
{
Image image;

if (argc != 3) {
fprintf(stderr, "Usage %s <infile> <outfile>\n", argv[0]);
return -1;
}
if (image.LoadImage(argv[1]) != 0) {
fprintf(stderr, "Error loading image from file %s\n", argv[1]);
return -1;
}
image.Invert();
if (image.SaveImage(argv[2]) != 0) {
fprintf(stderr, "Error saving image to file %s\n", argv[2]);
return -1;
}
return 0;
}

--
Nick Mein
MSc Student
Dept of Computer Science
University of Otago
Dunedin
New Zealand.

Marco Antoniotti

unread,
Dec 20, 1994, 1:11:00ā€ÆAM12/20/94
to
In article <3d5alh$6...@celebrian.otago.ac.nz> nm...@bifrost.otago.ac.nz (Nick Mein) writes:

--
Nick Mein
MSc Student
Dept of Computer Science
University of Otago
Dunedin
New Zealand.

20 Minutes. I do not touch type. I tried to write a "multidimensional"
'read-sequence' (a function with that name will be in the upcoming CL
ANSI, the first ANSI OO Language standard), decided it was not worth
the effort, went back to the original version, corrected a few errors
(there are still a few around), checked CLtL2 a couple of times
(actually more) and looked up some documentation strings on my
faithful AKCL.

Pretty good, isn't it? :)

Given that I do not touch type (and I am *really* slow at typing), my
personal speedup is about 4x, give or take.

Moreover, the CL program does more than the C++ program and ILISP
under Emacs makes it very easy to modify one function at a time and
reload and recompile it in the Lisp system without recompiling the
whole file. I did not use this feature now, but it is definitively an
extra speedup for larger projects.

Happy Lisping

Marco Antoniotti

PS. For the record, I am fluent in C, C++ and Ada.

-------------------------------------------------------------------------------
;;; -*- Mode: CLtL -*-

(defconstant +N+ 256)

(defclass image ()
((data :accessor image-data
:initform (make-array (* +N+ +N+)
:initial-element 0
:element-type '(unsigned-byte 8)))))


(defgeneric load-image (image filename))

(defgeneric save-image (image filename))


(defmethod load-image ((im image) (filename string))
(with-open-file (in-stream filename
:direction :input
:element-type '(unsigned-byte 8)
:if-does-not-exist :error)
(read-sequence (image-data im) in-stream (* +N+ +N+))))


(defmethod save-image ((im image) (filename string))
(with-open-file (out-stream filename
:direction :output
:element-type '(unsigned-byte 8)
:if-does-not-exist :create
:if-exists :supersede)
(write-sequence (image-data im) out-stream (* +N+ +N+))))


;;; 'fread' is a C library function that does 'buffered' input. The
;;; upcoming CL ANSI standard will have a counterpart called
;;; 'read-sequence'.
;;; I could have looked up the way AKCL or CMUCL interface with the C
;;; library, but I decided to just write a stupid version of it.

(defun read-sequence (array-data
stream
&optional
(dims (first (array-dimensions array-data))))
(dotimes (i dims)
(setf (aref array-data i) (read-byte stream t))))

;;; Same comments.

(defun write-sequence (array-data
stream
&optional
(dims (first (array-dimensions array-data))))
(dotimes (i dims)
(write-byte (aref array-data i) stream)))


(defgeneric invert (image))

(defmethod invert ((im image))
(with-slots (data) im
(dotimes (i (* +N+ +N+))
(setf (aref data i) (- 255 (aref data i))))))


(defun test (infile outfile)
(let ((im (make-instance 'image)))
(load-image im infile)
(invert im)
(save-image im outfile)))
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertholdt Brecht

Michael Callahan

unread,
Dec 20, 1994, 2:01:26ā€ÆAM12/20/94
to
Nick Mein (nm...@bifrost.otago.ac.nz) wrote:

: Anyway, I found the claims rather surprising, so here is a challenge. The


: following toy program loads a 256x256 8 bit grayscale image from a file,
: inverts it, and saves it to another file. I consider myself to be an
: intermediate-level C++ programmer. The program took me an hour (almost to
: the minute) to design, code, debug and test (Ok, I didn't test it very much).
: I look forward to seeing the previous posters' Lisp implementations of the
: same. It should only take them 6 - 30 minutes to knock them together, less
: if they are reasonably experienced in Lisp.


I do most of my programming in C and C++, although I would rather be working
in Lisp. I'd never opened a file in Common Lisp before now, so it took
me a few minutes to thumb through my CLtL2 and find the right keywords for
with-open-file. Here's my version of the program (untested):


(defun invert-file (input-file output-file)
(with-open-file (input-stream input-file
:direction :input
:element-type (unsigned-byte 256))
(with-open-file (output-stream output-file
:direction :output
:if-exists :supersede
:element-type (unsigned-byte 256))
;; read and write of the image header would go here

;; This is bad, I should go until the end of the file, not
;; count out how many bytes. It's not as flexible this way.
(dotimes (i (* 256 256))
(write-byte (- 255 (read-byte input-stream)
output-stream))))))


Well, that's it. I don't know if it's a fair comparison, the C++ version
had a lot of junk in it. If I were quickly prototyping in C:
(Note the emphasis on quickly!)


#include <stdio.h>

main (int argc, char **argv)
{
FILE *in = fopen(argv[1]);
FILE *out = fopen(argv[2]);
unsigned char buffer;

while (!feof(in)){
fread(buffer, 1, 1, in);
buffer = 255 - buffer;
fwrite(buffer, 1, 1, out);
}
}


Of course, I wouldn't recommend either one of these as examples in serious
software design. For that, I'd recommend grabbing an existing package,
like ImageMagick for images, and simply extending that to suit your needs.

mike

William Paul Vrotney

unread,
Dec 20, 1994, 3:13:47ā€ÆAM12/20/94
to
In article <3d5alh$6...@celebrian.otago.ac.nz> nm...@bifrost.otago.ac.nz (Nick Mein) writes:

>
> Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
> Vogt have written that programming in Lisp is 2 - 10 time more productive
> than programming in C (two of the posters professed to having little
> experience with C++). Surprisingly, no suggestion is made that this
> productivity advantage is restricted to any particular problem domain (perhaps
> Lisp is 10x more productive for an NLP system, but only 2x more productive
> if you are writing a Unix device driver?).

I don't think that the productivity ratio is related to particular problem
domains as much as it is to the complexity of the problem domains. And so I
would agree with the gist of your last sentence in the above paragraph, but
for different reasons (I wouldn't even guess 2x but more like 1x if that for
the Unix device driver).

Also I should add that I think that it is more than just the complexity of
the problem domain that increases the ratio. Uncertainty is another.
Uncertainty of the algorithm and data representation. This is where
exploratory programming methods are also very helpful. Although complexity
and uncertainty of the problem domain frequently go hand and hand they don't
have to. For example I helped develop an AI scheduling algorithm in Lisp
that took about six months to develop with a lot of intensive exploratory
programming. But the final algorithm turned out to be coded in only about 6
pages of code. Hence the final algorithm was not very complex but the
uncertainty or finding the correct abstractions was the hard part. Now that
I think back at this task, doing the same thing in C++ seems down right
silly. However I could easily recode the algorithm in C++ today in a few
hours.

>
> Anyway, I found the claims rather surprising, so here is a challenge. The
> following toy program loads a 256x256 8 bit grayscale image from a file,
> inverts it, and saves it to another file. I consider myself to be an
> intermediate-level C++ programmer. The program took me an hour (almost to
> the minute) to design, code, debug and test (Ok, I didn't test it very much).
> I look forward to seeing the previous posters' Lisp implementations of the
> same. It should only take them 6 - 30 minutes to knock them together, less
> if they are reasonably experienced in Lisp.
>

> - Negate image program deleted -

In my understanding of the productivity ratio this last paragraph does not
reflect any sort of important proof. Writing a small piece of a BITBLTer is
so well understood and so non-complex that I can not see any advantage to
simply coding this thing in Lisp versus coding it in C++. You only have one
Struct, Image, with two slots and a couple of methods. This hardly
qualifies as a complex program. In fact being a die hard Lisper that I am I
would probably still write this thing in C since it looks like an OS
utility. We already have a very elegant language for writing OS utilities
called C. I wouldn't even waste the time doing this in C++ unless there was
a requirement to provide an object interface.

By the way, I for one lose, it would probably take me more than 30 minutes
from start to finish to write your program in Lisp, so I will not even
start. But, if you really want to demonstrate the effect of the
productivity ratio lets pick a challenge that shows off Lisp and not one
that should obviously be done in C. Here is my challenge:

"You are given spectral sound files representing frequency, amplitude,
bandwidth and time of sounds picked up by hydrophones in the ocean. You are
looking for torpedo sounds but the hydrophones pick up the sounds of the
ship dragging the hydrophones and other miscellaneous noise in the ocean.
Develop an existing standard blackboard system shell and source model and
discover the blackboard level abstractions that allow the top level of the
blackboard to identify a torpedo source specified in the source model.
Hint1: Use the harmonic structure of sound sources for source modeling and
identification. Hint2: You need to consult with human experts on torpedo
sounds and do many experiments with that data to develop the rules."

This was one of my last tasks that I did in approx. 6 months in Lisp. How
long would it take you to do this in C++ given that you do not have our
results? I am very serious about this challenge to make a point but I hope
that you do not start it because I predict that you, assuming you get some
guidance of where to start, will fumble around for the first three months
only to discover that you need another 3 months to reinvent and implement
that portion of Lisp in C++ to give you the interactive visibility you need
to do the experiments. This prediction is based on the AI programs that I
developed in C++ instead of Lisp.

Erik Naggum

unread,
Dec 20, 1994, 5:01:43ā€ÆAM12/20/94
to
[Nick Mein]

| Anyway, I found the claims rather surprising, so here is a challenge.
| The following toy program loads a 256x256 8 bit grayscale image from a
| file, inverts it, and saves it to another file. I consider myself to
| be an intermediate-level C++ programmer. The program took me an hour
| (almost to the minute) to design, code, debug and test (Ok, I didn't
| test it very much). I look forward to seeing the previous posters'
| Lisp implementations of the same. It should only take them 6 - 30
| minutes to knock them together, less if they are reasonably experienced
| in Lisp.

hmmm.

(defun invert-image (image-path inverted-path)
(with-open-file (image image-path
:element-type '(unsigned-byte 8)
:direction :input)
(with-open-file (inverted inverted-path
:element-type '(unsigned-byte 8)
:direction :output
:if-exists :supersede)


(dotimes (i (* 256 256))

(write-byte (- 255 (read-byte image)) inverted)))))

4 minutes 50 seconds, including testing on a file made in by this function

(defun make-image (image-path)
(with-open-file (image image-path
:element-type '(unsigned-byte 8)
:direction :output
:if-exists :supersede)
(dotimes (i 256)
(dotimes (j 256)
(write-byte j image)))))

add 2 minutes to read and understand your code.

I am not proficient with file I/O in Common LISP, so had to look up
`with-open-file' to get the arguments right.

the functions are also moderately fast.

* (time (make-image "/tmp/test"))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

Evaluation took:
0.27 seconds of real time
0.26 seconds of user run time
0.01 seconds of system run time
0 page faults and
1344 bytes consed.
NIL
* (time (invert-image "/tmp/test" "/tmp/test.out"))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

Evaluation took:
0.56 seconds of real time
0.54 seconds of user run time
0.02 seconds of system run time
0 page faults and
3944 bytes consed.
NIL

CMU CL 17f on a 50 MHz SPARCstation 2. /tmp is in memory.

#<Erik>
--
requiescat in pace: Erik Jarve (1944-1994)

Dave Toland

unread,
Dec 20, 1994, 8:37:17ā€ÆAM12/20/94
to
I don't believe a single number for comparing the productivity is reasonable.
Good C++ programming really requires a good design phase before coding is
begun, or there will be lots of false starts. This is probably less true
of Lisp.

But which language scales better to *large* problems? Maybe Lisp gurus will
disagree with me, but I believe that C++ is better suited to very large
applications.

In addition to initial implementation, what about maintainability? Which
language is more reliably modified when the requirements evolve?

Opinions?

--
--------------------------------------------------------------------------
All opinions are MINE MINE MINE, and not necessarily anyone else's.
d...@phlan.sw.stratus.com | "Laddie, you'll be needing something to wash
(Dave Toland) | that doon with."

Mike McDonald

unread,
Dec 20, 1994, 10:09:14ā€ÆAM12/20/94
to
In article <3d5alh$6...@celebrian.otago.ac.nz>, nm...@bifrost.otago.ac.nz (Nick Mein) writes:

|> Anyway, I found the claims rather surprising, so here is a challenge. The
|> following toy program loads a 256x256 8 bit grayscale image from a file,
|> inverts it, and saves it to another file. I consider myself to be an
|> intermediate-level C++ programmer. The program took me an hour (almost to
|> the minute) to design, code, debug and test (Ok, I didn't test it very much).
|> I look forward to seeing the previous posters' Lisp implementations of the
|> same. It should only take them 6 - 30 minutes to knock them together, less
|> if they are reasonably experienced in Lisp.
|>
|>
|> // Toy C++ program that reads 256x256 gray-scale image from a file,
|> // and writes out the "negative" of the image.

If I were to write a C++ version, I'd write this:


#include <stdio.h>

main(int argc, char *argv[])
{

int x, y;
FILE *in, *out;

if (argc != 3) {
fprintf(stderr, "Usage %s <infile> <outfile>\n", argv[0]);

exit(-1);
}
in = fopen(argv[1], "r");
if (in == NULL) {
fprintf(stderr, "Cannot open \"%s\" for reading.\n", argv[1]);
exit(-1);
}
out = fopen(argv[2], "w");
if (out == NULL) {
fprintf(stderr, "Cannot open \"%s\" for writing.\n", argv[2]);
exit(-1);
}

for (y = 0 ; y < 256 ; y++)
for (x = 0 ; x < 256 ; x++)
putc(255 - getc(in), out);
fclose(in);
fclose(out);
}

Took all of three minutes to type in and compile. Nick's program illustrates
one of the things about the current batch of C++ programmers that bugs the
daylights out of me. There seems to be this feeling that if your using an "object
oriented" language, EVERYTHING has to be an object. What a total bunch of *&%^$!
The other thing Nick's program illustrates is the tendendcy to over spec
and over build. If all you need is a double for loop, just write a double for
loop!

Mike McDonald m...@titan.ess.harris.com

Ken Anderson

unread,
Dec 20, 1994, 6:01:15ā€ÆAM12/20/94
to

Dick Gabriel said that Lucid's experience with developing their
"C" tools (Energize), their experience indicated at least a 3 fold
difference between using C and Lisp as development languages - the
increased being in lisp's favour (i.e., productivity was 3 to 1 improved

Here is the quote i've seen. It is definitely worth reading the entire
article, and the entire "Critic at Large" series:

"My exerience with C++ development started with a group just learning C++
and ended with a fairly expert group. At the start I estimated the
productivity advantage of Lisp/CLOS over C/C++ at a factor of 3 to 4; at
the end I set ist at 30%."[p. 91]

R.P. Gabrial, Productivity: Is there a silver bullet?, Journal of Object
Oriented Programming, March-April, 1994, p. 89-92.

if one used lisp). I agree with LGM, this proves nothing, but is simply
some heresay which is contrary to Mr. Trickey's heresay.

--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA

Skip Egdorf

unread,
Dec 20, 1994, 12:46:50ā€ÆPM12/20/94
to
In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

But which language scales better to *large* problems? Maybe Lisp gurus will
disagree with me, but I believe that C++ is better suited to very large
applications.

I believe that history is against you here. I believe that there is
now general agreement that the major problem with the Lisp Machine
architecture being marketed to the general computer world was that the
Lisp Machine was marketed as the ONLY machine that could handle the
very largest and most complex problems. This was bourne out by the
many examples of Symbolics or Explorers being used for the massive
financial analysis expert systems, telephone-system routing, military
command and control modeling...

The problem was that there were still a lot of "simple" applications
that were being done in languages like C and C++, but the developers
didn't want to have to switch from the Sun/PC/Mac being used for the
"simple" problems to the Lisp Machine for the really hard stuff.
The C systems were better able to handle "Hello World".

These simple problems were often the entry-level prototypes for the
truely massive problems, but by the time the power of the Lisp machine
was actually needed, the problem was so entrenched in C (or whatever)
that the switch to the power tool was not perceived as feasable by
management. These projects might then fail due to the poor choice of
language with no acknowledgement that a different language might
actually have helped.

If the project was in C and failed, the conclusion was "It was too
large. No one could have done it."

If the project was in Lisp and failed, the conclusion was "Lisp is
just not a suitable language."

The number of truely massive projects accomplished on the Lisp
Machines is some evidence that these conclusions are both false, but
It would be interesting to get some sort of survey of "very large
applications" to see if a pattern would emerge favoring some language
or design technology.

Skip Egdorf
h...@lanl.gov

Jim McDonald

unread,
Dec 19, 1994, 5:00:20ā€ÆPM12/19/94
to

In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
...

|> It is interesting to muse with the powerful software automation of the
|> "software ICs" paradigm, such as NEXTSTEP, in the future perhaps the only
|> interesting programming will be consuming software ICs, producing software
|> ICs and otherwise exploratory programming. Contrary to popular opinion,
|> perhaps Lisp (or Lisp like languages) will be one of the few surviving human
|> written programming languages of the future.

It is interesting to watch features from Lisp creep into the C community.

A decade ago I was saying that the mainstream language after 2000 is
likely to be an evolutionary descendent of C, but will actually be much
closer to lisp implementations of the 80's than to C implementations of
that era. I think the evolution of C++ has tended to support that
contention. (My regret is that there will be a lot of wasted motion in
the process, witness the C++ world reinventing garbage collection.)

Even Bjarne's defense of C++ expressed pleasure at the speed with which
some lispish features had made it into C++, and many of the implementers
of those features were in fact Lisp programmers trying to carry their
vision of a progamming environment into C++.

Lisp has the great advantage of being based on a firm mathematical
foundation. In the long run, the random walks taken by the C community
are likely to arrive at that same maximum in the space of programming
languages.


Christopher J. Vogt

unread,
Dec 19, 1994, 5:53:01ā€ÆPM12/19/94
to
In article <vrotneyD...@netcom.com>,

William Paul Vrotney <vro...@netcom.com> wrote:
>In article <D0xAI...@rheged.dircon.co.uk> si...@rheged.dircon.co.uk (Simon Brooke) writes:
>
>> In article <BUFF.94De...@pravda.world>,
>> Richard Billington <bu...@pravda.world> wrote:
>> >Dick Gabriel said that Lucid's experience with developing their
>> >"C" tools (Energize), their experience indicated at least a 3 fold
>> >difference between using C and Lisp as development languages - the
>> >increased being in lisp's favour (i.e., productivity was 3 to 1 improved
>> >if one used lisp). I agree with LGM, this proves nothing, but is simply
>> >some heresay which is contrary to Mr. Trickey's heresay.
>>
>> Well, cf Erik Naggum's response to the above, Dick Gabriel in
>> particular and the Lucid team in general must be taken as LisP
>> experts, so that may to some extent weight against their experience as
>> reported above. Nevertheless, if this statement is something Dick has
>> published and provided some quantifiable support for, it would be
>> extremely useful material. There is so *damn* *little* serious study
>> of the comparative productivity of differing programming tools.
>
>It seems like computer science is stuck with surveys here. Is there any
>more scientific approach?

I remember reading a survey years ago when I was at Symbolics, but I
can't find it now.

I think that anything scientific is problematic at best, because you
really need a *big* project, and a large number of people, and how
can that be arranged?

How about somebody sponsoring a programming contest. A team of people
spend a weekend working on a problem, or a set of problems, free to
select what language and what computer they use.

I used to think that Lisp was 10x C (and I have no experience with C++)
but I now believe there is a range between 2x and 5x.

--
Christopher J. Vogt vo...@netcom.com
From: El Eh, CA

Larry Hunter

unread,
Dec 20, 1994, 3:29:15ā€ÆPM12/20/94
to
Nick Mein posted the following challenge:

Anyway, I found the claims [that lisp is 2-10x more productive than C]


rather surprising, so here is a challenge. The following toy program
loads a 256x256 8 bit grayscale image from a file, inverts it, and saves
it to another file. I consider myself to be an intermediate-level C++
programmer. The program took me an hour (almost to the minute) to design,
code, debug and test (Ok, I didn't test it very much). I look forward to
seeing the previous posters' Lisp implementations of the same. It should
only take them 6 - 30 minutes to knock them together, less if they are
reasonably experienced in Lisp.

I am reasonably experienced in lisp. The following program took me 2
minutes and 49 seconds to design, code, debug and test. (I tested it about
as much as you did -- it runs fine on a single example). Most of that time
was spent looking up read/write-byte in the manual because I wasn't sure it
did exactly what I wanted -- it does.

I guess LISP is more like 20x more productive. And much easier to read,
understand and debug.


;; This reads a file of (height * width) 8 bit values, inverts them, and
;; writes them to another file. It will signal errors in pretty much the same
;; situations that the C program will.

(defun invert (infile outfile &key (height 256) (width 256))
(with-open-file (in infile)
(with-open-file (out outfile :direction :output)
(dotimes (x height)
(dotimes (y width)
(write-byte (- 255 (read-byte in)) out))))))


--
Lawrence Hunter, PhD.
National Library of Medicine
Bldg. 38A, 9th floor
Bethesda. MD 20894 USA
tel: +1 (301) 496-9300
fax: +1 (301) 496-0673
internet: hun...@nlm.nih.gov
encryption: RIPEM via server; PGP via "finger hun...@work.nlm.nih.gov"

Dave Toland

unread,
Dec 20, 1994, 12:54:34ā€ÆPM12/20/94
to
In article <3d6s2q$h...@jabba.ess.harris.com>, m...@Titan.ESS.Harris.com (Mike McDonald) writes:
> If I were to write a C++ version, I'd write this:

<C (sic) program deleted to save on Spam cans>

> Took all of three minutes to type in and compile. Nick's program illustrates
> one of the things about the current batch of C++ programmers that bugs the
> daylights out of me. There seems to be this feeling that if your using an "object
> oriented" language, EVERYTHING has to be an object. What a total bunch of *&%^$!
> The other thing Nick's program illustrates is the tendendcy to over spec
> and over build. If all you need is a double for loop, just write a double for
> loop!

I can write a C program that looks a lot like BASIC or FORTRAN too. Yes
the C program you wrote is also a C++ program. For that matter, the
original example that Neil submitted could have used iostream constructs
instead of stdio functions, to be "purer".

I don't think anyone would argue against the statement that a small program
in C++ takes more design and planning than one in C, or Lisp. But when
the program gets larger, the pieces tend to fit together with fewer
deleterious interactions than programs written from a procedural
mindset. On the other hand, Lisp takes a functional approach, with
procedural features tacked on to the language after. My question
really comes down to, "Does the functional approach of Lisp scale up
as well as the OO approach of C++ (and other OO languages)."

I'm really talking less about the languages themselves than about the
programming paradigms they are designed to facilitate, although
inevitably, how well the language assists the user in holding to the
paradigm is also relevent.

I think it's also fair to note that the best choice for a given problem
domain may not be the best choice for a different problem domain, even if
the overall size and complexity of the solution is on the same order
of magnitude.

I'd prefer to keep the discussion in the newsgroup, though. I really
don't want to fill my mailbox with "Why my favorite language is the best".
I think though that a good engineer will be able to choose between
different languages for a given project on a reasonably impartial basis,
not that such a choice is always offered. I believe discussions such as
this can help us see where strong and weak points of each language
and design philosophy are, and tehrefore give us better ammunition for
an informed choice.

Stan Friesen

unread,
Dec 20, 1994, 3:14:50ā€ÆPM12/20/94
to
In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
|>
|> This is close to my experience. I have mentioned that there feels like a 5
|> to 1 productivity ratio in Lisp to C++ to my colleagues who are more or less
|> experts in both Lisp and C++ and the usual response is that it is too
|> conservative an estimate. One part of the big ratio is attributed to the
|> lack of a good C++ library. I created a Lispy C++ library for myself which
|> includes lists, dynamic typing and other nice Lispisms, ...

You did what??

This sounds like you are trying to program C++ as if it were Lisp.

This is certainly NOT going to get the most effective use out of C++.

Effective C++ programming style requires a different approach than
Lisp programming - the best solution of a given problem is often
different in the two languages.

[This is probably even *more* true of Lisp vs. C++ than it is of
Smalltalk vs. C++ - and Smalltalkish C++ code is very kludgy C++].

|> This leads me to believe that for a complex enough application, like a Go
|> program, it is better to develop it in Lisp then recode in C++ in the last
|> two weeks before delivery. Or better yet compile efficiently (enough) in
|> Lisp when that becomes more feasible.

Or it is better designed up front.

"exploratory programming" is *one* way to handle complexity.
Formal design is another way to handle complexity.

--
s...@elsegundoca.attgis.com sar...@netcom.com

The peace of God be with you.

Stan Friesen

unread,
Dec 20, 1994, 3:26:10ā€ÆPM12/20/94
to
In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
|> that should obviously be done in C. Here is my challenge:
|>
|> "You are given spectral sound files representing frequency, amplitude,
|> bandwidth and time of sounds picked up by hydrophones in the ocean. You are
|> looking for torpedo sounds but the hydrophones pick up the sounds of the
|> ship dragging the hydrophones and other miscellaneous noise in the ocean.
|> Develop an existing standard blackboard system shell and source model and
|> discover the blackboard level abstractions that allow the top level of the
|> blackboard to identify a torpedo source specified in the source model.
|> Hint1: Use the harmonic structure of sound sources for source modeling and
|> identification. Hint2: You need to consult with human experts on torpedo
|> sounds and do many experiments with that data to develop the rules."

This is, in some ways, both overspecified and underspecified.

It contains in the problem definition some implementation details
(for instance is the "blackboard" paradigm really required by the
problem to be solved, or is it merely the traditional way to
handle this sort of problem in Lisp?).

On the other hand, a full specification needs to at least *refer*
to a spectral decomposition of torpedo nose. Or, to put it another
way - I would follow Hint2 *before* I even started designing, let alone
coding, and would consider that time to be independent of the programming
effort, and therefore independent of the language I was using to implement.
Without the results of Hint2, I consider the problem spec incomplete.

Now, I can see that it might take months to gather the data required,
but I do not see that much programming is really necessary to do so,
just sound experimental design, and a decent mathematical model.
[Oh, I might use Macsyma or some such thing to manipulate the
mathematical model while tuning it - but that is not really
necessary to the process].

[P.S. are you sure that project isn't classified?].

David Hanley

unread,
Dec 20, 1994, 9:14:20ā€ÆPM12/20/94
to
Nick Mein (nm...@bifrost.otago.ac.nz)

Ok program, just a few complaints:

: #define MEM_ALLOC_ERROR -1


: #define FILE_OPEN_ERROR -2
: #define FILE_READ_ERROR -3
: #define FILE_WRITE_ERROR -4

Why not make these const ints?

: const int N = 256;

N is pretty un-descriptive

: void Image::Invert()
: {
: unsigned char* p;

: assert (data);

: for (p = data; p != data + width * height; p++)
: *p = 255 - *p;

: }

This could be written *much* faster, to wit:
int *p, e = ( int * ) ( data + width * height );
assert( p != NULL ); //Much nicer this way.
for( p = (int*) data ; p < e ; ++p)
*p ^= 0xffffffff;

There's a fwe bogeys here, but this should be at least 4 times
faster than the version you gave, on a 32-bit machine.

--
------------------------------------------------------------------------------
| David James Hanley -- dha...@lac.eecs.uic.edu -- C++, OOD, martial arts |
| Laboratory for advanced computing | My employer barely KNOWS me. |
------------------------------------------------------------------------------
"There are trivial truths & there are great truths. The opposite of a
trivial truth is plainly false."
"The opposite of a great truth is also true."

-Neils Bohr

Thomas H. Moog

unread,
Dec 20, 1994, 10:21:21ā€ÆPM12/20/94
to
Both the C++ and the Lisp versions are pre-occupied with input/output.

In a Lisp environment (SmallTalk enviroment, APL environment, even
Basic !) the data could be loaded once and then manipulated in many
ways: copied, rescaled, compressed, using builtin language features.
In a C/C++ environment, the same goal must be accomplished with an
independent scripting language, shell variables etc. Lisp has a more
unified approach which is ideal for exploratory programming.

I'm an intermediate level C++ programmer, but I think that C++ is
complex. It is NECESSARY complexity which arises from two design
decisions: it must be a superset of C and it must only have features
which can be implemented efficiently on a typical processor. It met
its design goals AND it is complex.

There is a way to measure language complexity to one significant
digit: the size of a good manual for the language. The essentials for
SmallTalk are covered in about 100 pages of the blue book by Goldberg
(the rest is the "standard library" and details of implementation).
The C++ ARM is about 400 pages and is much more difficult reading then
Goldberg. How large is a Lisp manual ?

Tom Moog

William Paul Vrotney

unread,
Dec 21, 1994, 12:07:06ā€ÆAM12/21/94
to

> In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
> |>
> |> This is close to my experience. I have mentioned that there feels like a 5
> |> to 1 productivity ratio in Lisp to C++ to my colleagues who are more or less
> |> experts in both Lisp and C++ and the usual response is that it is too
> |> conservative an estimate. One part of the big ratio is attributed to the
> |> lack of a good C++ library. I created a Lispy C++ library for myself which
> |> includes lists, dynamic typing and other nice Lispisms, ...
>
> You did what??
>
> This sounds like you are trying to program C++ as if it were Lisp.

No, I just want to have some of the library capability in C++ that I have in
lisp.

>
> This is certainly NOT going to get the most effective use out of C++.

Who cares? My main concern is to get the program working, not half working
with lots of gotchas and memory leaks. I've seen too many C++ programs in
this state. I'm wide open to suggestions as how to get the most effective
use out of C++.

>
> Effective C++ programming style requires a different approach than
> Lisp programming - the best solution of a given problem is often
> different in the two languages.
>

I think that I've tried this "C++ programming style" that you speak of and I
don't like it. I've read some of the C++ books. If I had to characterize
it I would say that this style (if I'm hearing you) is too static, and I
have to go way out of my way to write extraneous code to support this static
style, which introduces all sorts bugs and obscurities. That's why I've
changed to a more Lispy programming style. I find that it is quite
effective.

>
> |> This leads me to believe that for a complex enough application, like a Go
> |> program, it is better to develop it in Lisp then recode in C++ in the last
> |> two weeks before delivery. Or better yet compile efficiently (enough) in
> |> Lisp when that becomes more feasible.
>
> Or it is better designed up front.
>
> "exploratory programming" is *one* way to handle complexity.
> Formal design is another way to handle complexity.
>

Well, I'm telling you from experience that the Formal Design approach for
the kinds of problem that I mention is going to fail you. You can sit in a
room and formally design all you want but you are not going to get anywhere
until you get some visualization of your data world.

Nick Mein

unread,
Dec 21, 1994, 1:51:39ā€ÆAM12/21/94
to

Several people (Michael Callahan, Erik Naggum, Mike McDonald, Larry Hunter)
seemed to miss the point of my challenge. Of course I could have
written the toy program in a few lines of C. I tried to come up with a problem
that C++ seemed well suited to, but that did not take hours or pages
of code to solve. A real Image class would have other operations
defined than load, invert and save; and would be part of a much
larger program (and ideally reusable, so that it could be used in many such
programs).

From: mar...@mosaic.nyu.edu (Marco Antoniotti)

> 20 Minutes. I do not touch type. I tried to write a "multidimensional"
> 'read-sequence' (a function with that name will be in the upcoming CL
> ANSI, the first ANSI OO Language standard), decided it was not worth
> the effort, went back to the original version, corrected a few errors
> (there are still a few around), checked CLtL2 a couple of times
> (actually more) and looked up some documentation strings on my
> faithful AKCL.

> Pretty good, isn't it? :)

Yes, I'm impressed. Although I did have to change the names of functions
load-image and save-image to get your program to run (on a Harlequin
LispWorks system). Portability problems in the Lisp world?

> Moreover, the CL program does more than the C++ program

Could you be more specific? I don't read Lisp well enough to be able to
detect the extra functionality. Actually, I think that you have supplied
somewhat less than I have. Your image class (which, admittedly, is the
significant part of the example) is fine, but your test function does not
perform the command line parsing and handling of errors performed
by main.

Also (maybe significantly) there does not appear to be the equivalent
of my "assert(data)" - an important check that some client of the class
will not get away with doing something stupid - like manipulating an image
that hasn't been loaded yet!


From: call...@xmission.com (Michael Callahan)

> I do most of my programming in C and C++, although I would rather be working
> in Lisp.

> the C++ version had a lot of junk in it.

Thanks for the insightful and informative comment.

> main (int argc, char **argv)
> {
> FILE *in = fopen(argv[1]);
> FILE *out = fopen(argv[2]);
> unsigned char buffer;
>
> while (!feof(in)){
> fread(buffer, 1, 1, in);
> buffer = 255 - buffer;
> fwrite(buffer, 1, 1, out);
> }
> }

Oh, I see. Checking the number of command line arguments, checking
the return value of system calls, returning a meaningful value to
the system is a lot of junk.

Making those 65536 calls to fread and fwrite was pretty smart - should
really do wonders for performance.

I'd prefer it if you were working in Lisp as well.


From: vro...@netcom.com (William Paul Vrotney)

>> Lisp is 10x more productive for an NLP system, but only 2x more productive
>> if you are writing a Unix device driver?).

> I don't think that the productivity ratio is related to particular problem


> domains as much as it is to the complexity of the problem domains. And so I
> would agree with the gist of your last sentence in the above paragraph, but
> for different reasons (I wouldn't even guess 2x but more like 1x if that for
> the Unix device driver).

Really? It's always scary when you try to be sarcastic and somebody takes
you seriously.

> Also I should add that I think that it is more than just the complexity of
> the problem domain that increases the ratio. Uncertainty is another.
> Uncertainty of the algorithm and data representation. This is where
> exploratory programming methods are also very helpful.

I am not prepared to get into a debate on "exploratory programming" at the
moment (is this a technical term with its own literature?). Certainly
no one claims that C++ is suitable for all programming tasks. What is
at issue is whether or not Lisp is more suitable than C++ for (almost)
all tasks.

> Writing a small piece of a BITBLTer is
> so well understood and so non-complex that I can not see any advantage to
> simply coding this thing in Lisp versus coding it in C++. You only have one
> Struct, Image, with two slots and a couple of methods. This hardly
> qualifies as a complex program.

Ah, but complex systems can be decomposed into simpler components. See
Booch for an extensive discussion of this issue.

> By the way, I for one lose, it would probably take me more than 30 minutes
> from start to finish to write your program in Lisp, so I will not even
> start. But, if you really want to demonstrate the effect of the
> productivity ratio lets pick a challenge that shows off Lisp and not one

> that should obviously be done in C.

No. To make the challenge convincing you would have to pick a challenge that
shows off C++, and demonstate that it was _still_ more productive to use
Lisp than to use C++.


From: Erik Naggum <er...@naggum.no>

> the functions are also moderately fast.

> * (time (make-image "/tmp/test"))
> Compiling LAMBDA NIL:
> Compiling Top-Level Form:

> Evaluation took:
> 0.27 seconds of real time
> 0.26 seconds of user run time
> 0.01 seconds of system run time
> 0 page faults and
> 1344 bytes consed.
> NIL
> * (time (invert-image "/tmp/test" "/tmp/test.out"))
> Compiling LAMBDA NIL:
> Compiling Top-Level Form:

> Evaluation took:
> 0.56 seconds of real time
> 0.54 seconds of user run time
> 0.02 seconds of system run time
> 0 page faults and
> 3944 bytes consed.
> NIL

> CMU CL 17f on a 50 MHz SPARCstation 2. /tmp is in memory.

Depends what you mean by fast, I suppose.

time invert test.img out.img
0.0u 0.0s 0:00 100% 57+253k 0+12io 0pf+0w

on a DECStation 5000.

From: d...@sw.stratus.com (Dave Toland)

> For that matter, the original example that Neil submitted could have used
> iostream constructs instead of stdio functions, to be "purer".


It is actually Nick, not Neil. Yes, I've been meaning to look up iostreams
one day.

> I don't think anyone would argue against the statement that a small program
> in C++ takes more design and planning than one in C, or Lisp. But when
> the program gets larger, the pieces tend to fit together with fewer
> deleterious interactions than programs written from a procedural
> mindset. On the other hand, Lisp takes a functional approach, with
> procedural features tacked on to the language after. My question
> really comes down to, "Does the functional approach of Lisp scale up
> as well as the OO approach of C++ (and other OO languages)."

I largely agree with this post, but I imagine that when people have been
referring to Lisp they mean that CLOS, which is also object-oriented.


From: mcdo...@kestrel.edu (Jim McDonald)

> There are at leas a couple of problems with this challenge.

> The main problem is that you have given an overly specified problem.

I was hoping that people would (if they took the challenge seriously at all)
look at the spirit of the challenge without me laying down the rules
explicitly. Marco Antoniotti did so, and so did you with your second & third
versions. Thanks.

> The second problem is that the task you chose to measure is very low
> level and doesn't really use any interesting features of a language--
> it's almost nothing more than a few calls to the OS.

But my example does use an interesting feature of C++ - encapsulation.

> I'll do three versions:
> one very simple and close to your example, a second slightly
> more abstract, and a third closer to the problem I've given
> above.

> SECOND VERSION
> I used parameterized types and sizes, 2D arrays, and added an
> explicit test routine.

This sounds a bit closer to my version. Adding a header to the image
file didn't seem to add anything interesting to the problem.

> This time the code was written in a
> file (instead of at the lisp listener as above) and compiled.
> I compiled it a second time with the optimizing compiler but
> the test showed very little timing improvement in this case.

> > (test "/tmp/hack1" "/tmp/hack2" "/tmp/hack3")
> Elapsed Real Time = 5.40 seconds
> Total Run Time = 5.33 seconds
> User Run Time = 5.27 seconds
> System Run Time = 0.06 seconds
> Process Page Faults = 12
> Dynamic Bytes Consed = 131,088
> Ephemeral Bytes Consed = 5,184
> Elapsed Real Time = 5.14 seconds
> Total Run Time = 5.09 seconds
> User Run Time = 5.06 seconds
> System Run Time = 0.03 seconds
> Process Page Faults = 12
> Dynamic Bytes Consed = 131,088
> Ephemeral Bytes Consed = 5,168

The run times speak for themselves.

Erik Naggum

unread,
Dec 21, 1994, 3:14:31ā€ÆAM12/21/94
to
[Thomas H. Moog]

| There is a way to measure language complexity to one significant
| digit: the size of a good manual for the language. The essentials for
| SmallTalk are covered in about 100 pages of the blue book by Goldberg
| (the rest is the "standard library" and details of implementation).
| The C++ ARM is about 400 pages and is much more difficult reading then
| Goldberg. How large is a Lisp manual ?

[Andrew Koenig]

| The Common Lisp manual is about 1,100 pages.

which language is more complex? Modula-2 or Ada? let's ask ISO JTC 1 SC
22 who recently published Draft International Standards for both languages.

ISO DIS 8652 Ada: 551 pages.
ISO DIS 10514 Modula-2: 707 pages.

clearly, Modula-2 is more complex than Ada.

now, is Prolog a complex language? again, let's ask ISO JTC 1 SC 22.

ISO DIS 13211-1 Prolog -- general core: 191 pages.

no, Prolog is a simple language.

now, let's compare more standards by this universal method.

ISO 8879 SGML: 155 pages (of which 100 pages are informative)
ISO 10744 HyTime: 125 pages

we can conclude that SGML is very simple, and that HyTime is much simpler
than Prolog.

let's try some character set standards.

ISO 2022: 47 pages.
ISO 10646: 754 pages.

_obviously_, ISO 2022 is the simpler of the standards.

those who know these standards observe that Modula-2 is specified with VDM
(Vienna Definition Method) which defines the formal semantics of the
language in a slightly verbose form. Modula-2 is a big standard because it
is defined with very fine granularity.

Ada is clearly a more complex language than Modula-2, but it is also a very
solid standard. its semantics is well-defined.

C++ is a language which leaves a lot to the implemenation. if 5 % 2 is 1,
does -5 % 2 yield 1 or -1? well, your _implementation_ will tell you.
this is because C++, like C, doesn't want to impose on the hardware
division instruction or the compiler support for it, but rather wants you,
the programmer, to know your hardware before you can divide two numbers.
this is what I consider to be complexity -- randomness.

SGML is generally known to be overly complex, HyTime is too complex for any
human being to handle fully, and the standards are so hard to read that it
is impossible to decide whether they are complete or not. they are
generally conceded to be inferior as standards documents. their brevity is
not an asset.

ISO 2022 is so complex to use in its full form that people weep just from
being exposed to it. ISO 10646 is a straightforward, 16-bit character set
standard that just happens to contain about 700 pages of character tables.

is ASN.1 complex? no, the standard is only 50 pages long.

gentlemen, with all due respect, if you think page count is a measure of
anything related to _complexity_, you should have your heads examined. or
be exposed to some actual specifications of known complex topics. that you
think there is a relation only tells everyone who has been exposed to
complex topics and their specifications that you don't know what you're
talking about.

Scott McLoughlin

unread,
Dec 20, 1994, 8:06:19ā€ÆPM12/20/94
to
d...@sw.stratus.com (Dave Toland) writes:

> procedural features tacked on to the language after. My question
> really comes down to, "Does the functional approach of Lisp scale up
> as well as the OO approach of C++ (and other OO languages)."
>

Howdy,
Good question. I'd answer with a resounding yes. Common Lisp
includes CLOS, which "scales up" extremely well. See the papers in
"Object Oriented Programming: The CLOS Perspective" for some wild
and nifty examples of "scaling up" (a Good Read, IMHO, in any case).
In general, though, the Lisp family languages include
constructs that scale up to most problems, whether or not the language
includes native OOP constructs. Problem seems to be scaling
down ;-) This is where C/Pascal/C++ really shine.

=============================================
Scott McLoughlin
Conscious Computing
=============================================

David Hanley

unread,
Dec 21, 1994, 8:04:02ā€ÆAM12/21/94
to
David Hanley (dhanley@picasso)

In fact, even better:

void Reverse( char *f1 , char *f2 )
{
FILE *in = fopen( f1 , "r" );
FILE *out = fopen( f2 , "w" );

int i;
while( fread( &i , 4 , 1 , in ) == 1 )
{
i ^= 0xffffffff;
fwrite( &i , 4 , 1 , out );
}
fclose( in );
fclose( out );
}

~2 minutes.

Curt Eggemeyer

unread,
Dec 21, 1994, 8:56:36ā€ÆAM12/21/94
to
In article <EGDORF.94D...@zaphod.lanl.gov> egd...@zaphod.lanl.gov (Skip Egdorf) writes:
>In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:
>
> But which language scales better to *large* problems? Maybe Lisp gurus will
> disagree with me, but I believe that C++ is better suited to very large
> applications.
>
>I believe that history is against you here. <rest removed>


Here at the lab, high-level complex problems are usually prototyped in LISP and
then later ported to C or C++. A good example of this was JSC's (Johnson
Space Center) development of CLIPS derived from Inference's ART (forward-
chainer rule-base system).

One of the C-world's problem is that the language's implementation gets in
the way of the representation of the problem. It seems that many developments
tend to "pin" down how to represent the problem in LISP first, because of its
"natural" extensibility and easy way of dealing with the abstract without the
need to memory manage. Once these principals are defined, you can then port
over to the static language environments. Of course, many of the LISP apps
could also be optimized and I would argue in the end you could probably get
"speedwise" within 10% of C/C++.


Stephen J Bevan

unread,
Dec 21, 1994, 9:15:38ā€ÆAM12/21/94
to
In article <D15BJ...@research.att.com> a...@research.att.com (Andrew Koenig) writes:

In article <3d86vh$1...@Mars.mcs.com> tm...@MCS.COM (Thomas H. Moog) writes:

> There is a way to measure language complexity to one significant
> digit: the size of a good manual for the language. The essentials for
> SmallTalk are covered in about 100 pages of the blue book by Goldberg
> (the rest is the "standard library" and details of implementation).
> The C++ ARM is about 400 pages and is much more difficult reading then
> Goldberg. How large is a Lisp manual ?

The Common Lisp manual is about 1,100 pages.

The Scheme manual is about 50 pages.

Lawrence G. Mayka

unread,
Dec 21, 1994, 10:25:05ā€ÆAM12/21/94
to
In article <3d75oq$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

I don't think anyone would argue against the statement that a small program
in C++ takes more design and planning than one in C, or Lisp. But when
the program gets larger, the pieces tend to fit together with fewer
deleterious interactions than programs written from a procedural
mindset. On the other hand, Lisp takes a functional approach, with
procedural features tacked on to the language after. My question
really comes down to, "Does the functional approach of Lisp scale up
as well as the OO approach of C++ (and other OO languages)."

Let me bring you up to date: Flavors, a first-generation
object-oriented programming system that included multiple inheritance
and method combination, was added to MIT Lisp around 1979, began
selling commercially in 1981, and was subsequently adopted as a de
facto standard by major Common Lisp vendors. CLOS, a
second-generation OOP system that adds multiple-argument dispatch
capability, was voted into the draft ANSI standard for Common Lisp in
1988 following an experimentation period, and was subsequently folded
into all major commercial and free-of-charge implementations,
replacing Flavors. ANSI should be proclaiming the Common Lisp
standard officially within a few weeks from now.

In ANSI Common Lisp, every information element is an object, belonging
to a class. Classes form an inheritance lattice, and can be
meta-classified as built-in (not further inheritable), structured
(extendible via single inheritance), and standard (extendible via
multiple inheritance). Methods can specialize on any class, including
built-in classes, as well as on individual objects. Methods can
specialize on any or all of their required positional arguments, not
just the first. Most major Common Lisp implementations also support a
meta-object protocol which permits the extension of the OOP system
itself.

In short, the Common Lisp community actually has more experience with
the OOP paradigm than perhaps anyone else.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Lawrence G. Mayka

unread,
Dec 21, 1994, 10:01:32ā€ÆAM12/21/94
to
In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

In addition to initial implementation, what about maintainability? Which
language is more reliably modified when the requirements evolve?

I certainly agree that =evolvability= is perhaps the most important
characteristic of a large application nowadays. No one can afford to
build a stone-monolith application that crumbles as soon as the
requirements change. Note that this pertains not only to application
maintenance, but to initial implementation as well, insofar as
requirements can and do change even during the initial architecture,
design, and implementation. A company that refuses to recognize this
reality will repeatedly build products that are obsolete (from the
customer's viewpoint) before they leave the shop.

Similarly, evolvability is probably the most important characteristic
of a programming language as well. A programming language, together
with its implementations, libraries, development environments, and
associated infrastructure, is itself a very large application of
computing technology to meet customer needs. As these needs change,
so must the language, its implementations, libraries, and
environments. Once again, a language that refuses to recognize this
reality will be obsolete before it even achieves standardization.

Perhaps those with significant experience in both CLOS and C++ can
comment on the ability of those languages, and the large applications
built with them, to evolve smoothly and rapidly to meet customer
needs.

Jim McDonald

unread,
Dec 20, 1994, 10:29:01ā€ÆPM12/20/94
to

Note: My reply is rather long because I've included transcripts for three
sessions, including one testing a program that is substantially more
capable than Nick requested.
The functions and files alluded to here are available on request
(not that I think they're worth much) to avoid making this post
impossibly long.

In article <3d5alh$6...@celebrian.otago.ac.nz>, nm...@bifrost.otago.ac.nz (Nick Mein) writes:
|>

|> Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
|> Vogt have written that programming in Lisp is 2 - 10 time more productive
|> than programming in C (two of the posters professed to having little
|> experience with C++). Surprisingly, no suggestion is made that this
|> productivity advantage is restricted to any particular problem domain (perhaps

|> Lisp is 10x more productive for an NLP system, but only 2x more productive
|> if you are writing a Unix device driver?).
|>

|> Anyway, I found the claims rather surprising, so here is a challenge. The


|> following toy program loads a 256x256 8 bit grayscale image from a file,
|> inverts it, and saves it to another file. I consider myself to be an
|> intermediate-level C++ programmer. The program took me an hour (almost to
|> the minute) to design, code, debug and test (Ok, I didn't test it very much).
|> I look forward to seeing the previous posters' Lisp implementations of the
|> same. It should only take them 6 - 30 minutes to knock them together, less
|> if they are reasonably experienced in Lisp.
|>

[text of toy C++ prgram deleted]

There are at leas a couple of problems with this challenge.

The main problem is that you have given an overly specified problem.

Better would be to say something like this:

Read an N dimensional image from a file where the byte size and
dimensions are given by data in the file.

Perform a series of operations on that image.

Save the result to another file.

For simple test cases, use one inversion to see that a different
file results, and two inversions to get a fixpoint.

The second problem is that the task you chose to measure is very low
level and doesn't really use any interesting features of a language--
it's almost nothing more than a few calls to the OS.

But, lest I be blamed for whining, I'll do three versions:


one very simple and close to your example, a second slightly
more abstract, and a third closer to the problem I've given
above.

TRANSCRIPT FOR SIMPLE VERSION
Total elapsed time: 4 minutes, 8 seconds to write and (crudely) test.

.sa_394. date
Tue Dec 20 16:36:30 PST 1994
.sa_395. ./bin/lisp
[startup messages elided...]

> (defun invert-image (in-file out-file)
(with-open-file (in-stream in-file :element-type '(unsigned-byte 8))
(with-open-file (out-stream out-file :element-type '(unsigned-byte 8)
:direction :output)


(dotimes (i 256)
(dotimes (j 256)

(write-byte (- 255 (read-byte in-stream)) out-stream))))))
INVERT-IMAGE
> (compile *)
;;; You are using the compiler in DEVELOPMENT mode (compilation-speed = 3)
;;; If you want faster code at the expense of longer compile time,
;;; you should use the production mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 0)))
;;; Generation of full safety checking code is enabled (safety = 3)
;;; Optimization of tail calls is disabled (speed = 2)
INVERT-IMAGE
> (with-open-file (s "/tmp/hack" :element-type '(unsigned-byte 8) :direction :output)
(dotimes (i 256) (dotimes (j 256) (write-byte 11 s))))
NIL
> (time (invert-image "/tmp/hack" "/tmp/hack2"))
Elapsed Real Time = 4.85 seconds
Total Run Time = 4.78 seconds
User Run Time = 4.77 seconds
System Run Time = 0.01 seconds
Process Page Faults = 47
Dynamic Bytes Consed = 0
Ephemeral Bytes Consed = 3,472
NIL
> (time (invert-image "/tmp/hack2" "/tmp/hack3"))
Elapsed Real Time = 5.03 seconds
Total Run Time = 4.79 seconds
User Run Time = 4.77 seconds
System Run Time = 0.02 seconds
Process Page Faults = 18
Dynamic Bytes Consed = 0
Ephemeral Bytes Consed = 3,424
NIL
> (quit)
.sa_396. diff /tmp/hack /tmp/hack3
.sa_397. date
Tue Dec 20 16:40:38 PST 1994
.sa_398.

SECOND VERSION
I used parameterized types and sizes, 2D arrays, and added an

explicit test routine. This time the code was written in a


file (instead of at the lisp listener as above) and compiled.
I compiled it a second time with the optimizing compiler but
the test showed very little timing improvement in this case.

Note that the test uses /bin/diff to compare the resulting files.
Total elapsed time: 9 minutes, 26 seconds.

.sa_399. date
Tue Dec 20 16:40:39 PST 1994
.sa_400. ./bin/lisp
[startup messages elided...]
[preparation of hack.lisp not shown here]
> (load (compile-file "~/hack.lisp"))
;;; You are using the compiler in DEVELOPMENT mode (compilation-speed = 3)
;;; If you want faster code at the expense of longer compile time,
;;; you should use the production mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 0)))
;;; Generation of full safety checking code is enabled (safety = 3)
;;; Optimization of tail calls is disabled (speed = 2)
;;; Reading source file "/usr/home/kestrel/mcdonald/hack.lisp"
;;; Writing binary file "/usr/home/kestrel/mcdonald/hack.sbin"
;;; Loading binary file "/usr/home/kestrel/mcdonald/hack.sbin"
#P"/usr/home/kestrel/mcdonald/hack.sbin"
> (prepare-test "/tmp/hack1")
NIL


> (test "/tmp/hack1" "/tmp/hack2" "/tmp/hack3")

Elapsed Real Time = 6.18 seconds
Total Run Time = 6.11 seconds
User Run Time = 6.05 seconds


System Run Time = 0.06 seconds

Process Page Faults = 19


Dynamic Bytes Consed = 131,088

Ephemeral Bytes Consed = 5,216
Elapsed Real Time = 5.97 seconds
Total Run Time = 5.92 seconds
User Run Time = 5.88 seconds
System Run Time = 0.04 seconds
Process Page Faults = 18


Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,168


Diff1:

(NIL NIL 0 NIL)

Diff2:
Binary files /tmp/hack1 and /tmp/hack2 differ

(NIL NIL 1 NIL)
NIL
> (proclaim '(optimize (speed 3) (safety 0) (compilation-speed 0)))
T
> (load (compile-file "~/hack.lisp"))
;;; You are using the compiler in PRODUCTION mode (compilation-speed = 0)
;;; If you want shorter compile time at the expense of reduced optimization,
;;; you should use the development mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 3)))
;;; Generation of runtime error checking code is disabled (safety = 0)
;;; Optimization of tail calls is enabled (speed = 3)
;;; Reading source file "/usr/home/kestrel/mcdonald/hack.lisp"
;;; Writing binary file "/usr/home/kestrel/mcdonald/hack.sbin"
;;; Loading binary file "/usr/home/kestrel/mcdonald/hack.sbin"
#P"/usr/home/kestrel/mcdonald/hack.sbin"


> (test "/tmp/hack1" "/tmp/hack2" "/tmp/hack3")
Elapsed Real Time = 5.40 seconds
Total Run Time = 5.33 seconds
User Run Time = 5.27 seconds
System Run Time = 0.06 seconds
Process Page Faults = 12
Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,184
Elapsed Real Time = 5.14 seconds
Total Run Time = 5.09 seconds
User Run Time = 5.06 seconds
System Run Time = 0.03 seconds
Process Page Faults = 12
Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,168


Diff1:

(NIL NIL 0 NIL)

Diff2:
Binary files /tmp/hack1 and /tmp/hack2 differ

(NIL NIL 1 NIL)
NIL
> (quit)
.sa_401. date
Tue Dec 20 16:50:05 PST 1994

THIRD VERSION (Timing is problematic since I had to deal with many
other things [like work!] interleaved with this -- estimate is 30
minutes during an elapsed time of about 90 minutes.)

This version has a file format that lets you specify the byte size
and the dimensions of the image (e.g. 256*256, 100*100*8, etc.).
There is a reasonably general mapping function that lets you
apply a sequence of functions to the locations in one image to
pointwise produce a new image.

The test scenario for this version is
(1) Make an image file of size 256 * 256 * 4 with 16-bit bytes,
initialized with 1's for all 262,144 elements.
Note that the image dimensions and byte size are parameters
passed to the routines to be tested.
(2) Read that image file.
(3) Create a partial inversion of the image by inverting all the
elements in the first plane (i.e., at positions whose third
index is 0), but keep the 3 other planes intact.
(4) Write the result back out to a second file.
(5) Repeat 2,3,4 starting with the second file, to get an image
in a third file that is a partial inversion of the second file.
This file should be equivalent to the first file.
(6) Repeat 2,3,4 but for step 3 use the inversion function twice
before writing the fourth file, which also should be
equivalent to the first file.

It should also take less than a minute to prepare and start other
tests (e.g., compute a new value at x,y that averages the 9 values
found at the locations around x,y, or compute the gradient at each
point from the old image and store it in planes 2 and 3, etc),
plus maybe a minute to run them.

So the challenge back to you is to provide a C++ version with
that functionality. Can you do it in less than an hour?

[preparation of hack3.lisp occurred bere]
.sa_411. date
Tue Dec 20 18:20:43 PST 1994
.sa_412. ./bin/lisp

lisp startup messages elided...

> (load (compile-file "hack3.lisp"))
;;; You are using the compiler in DEVELOPMENT mode (compilation-speed = 3)
;;; If you want faster code at the expense of longer compile time,
;;; you should use the production mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 0)))
;;; Generation of full safety checking code is enabled (safety = 3)
;;; Optimization of tail calls is disabled (speed = 2)
;;; Reading source file "hack3.lisp"
;;; Writing binary file "hack3.sbin"
;;; Loading binary file "hack3.sbin"
#P"/usr/home/kestrel/mcdonald/hack3.sbin"

> (time (prepare-test "/tmp/jjj.xxx" '(unsigned-byte 16) '(256 256 4)))
Elapsed Real Time = 18.83 seconds
Total Run Time = 18.46 seconds
User Run Time = 18.19 seconds
System Run Time = 0.27 seconds
Process Page Faults = 81
Dynamic Bytes Consed = 524,296
Ephemeral Bytes Consed = 11,280,720
There were 21 ephemeral GCs
NIL

> (test "/tmp/jjj.xxx" "/tmp/jjj.two" "/tmp/jjj.three" "/tmp/jjj.four")

[copying jjj.xxx to jjj.two, inverting first plane]
Elapsed Real Time = 57.87 seconds
Total Run Time = 56.91 seconds
User Run Time = 56.45 seconds
System Run Time = 0.46 seconds
Process Page Faults = 70
Dynamic Bytes Consed = 1,048,712
Ephemeral Bytes Consed = 41,960,424
There were 80 ephemeral GCs

[copying jjj.two to jjj.three inverting first plane]
Elapsed Real Time = 59.70 seconds
Total Run Time = 57.48 seconds
User Run Time = 57.11 seconds
System Run Time = 0.37 seconds
Process Page Faults = 117
Dynamic Bytes Consed = 1,082,288
Ephemeral Bytes Consed = 41,960,624
There were 2 dynamic GCs
There were 81 ephemeral GCs

[copying jjj.one to jjj.four inverting inversion of first plane]
Elapsed Real Time = 81.40 seconds (1 minute, 21.40 seconds)
Total Run Time = 78.81 seconds (1 minute, 18.81 seconds)
User Run Time = 77.85 seconds (1 minute, 17.85 seconds)
System Run Time = 0.96 seconds
Process Page Faults = 68
Dynamic Bytes Consed = 2,104,904
Ephemeral Bytes Consed = 61,626,624
There were 2 dynamic GCs
There were 118 ephemeral GCs


Diff 1 2 -- File 2 is inversion of file 1
Binary files /tmp/jjj.xxx and /tmp/jjj.two differ

(NIL NIL 1 NIL)

Diff 1 3 -- File 3 is inversion of file 2

(NIL NIL 0 NIL)

Diff 1 4 -- File 4 is inversion of inversion of file 1

(NIL NIL 0 NIL)
NIL
> (quit)
.sa_413. date
Tue Dec 20 18:27:36 PST 1994

Lyman S. Taylor

unread,
Dec 21, 1994, 2:16:52ā€ÆPM12/21/94
to
In article <3d8j9r$c...@celebrian.otago.ac.nz>,

Nick Mein <nm...@bifrost.otago.ac.nz> wrote:
>
>The run times speak for themselves.
>--

Point 1.

Ahem, could we *PLEASE* stick with the initial challenge. Which was to
measure PROGRAMMER PRODUCTIVITY! NOT I can write C code that executes faster
than Lisp code.

The MAJOR speed difference between these programers is that C's std library
has routines that can read/write blocks of bytes at a
time and Lisp's std. library doesn't. Big deal! Any decent Lisp vendor
could add that into a lisp implementation in a small amount of time.

However, as was expressed ( several people saying... "I don't normally do
file I/O in Lisp ... I had to look it up" ), doing file I/O is not something
most Lisp users preoccupy themselves with. Therefore, I don't expect very
many vendors to run out and add this "feature".

This is a difference in standard Libraries and has nothing to do with
programmer productivity unless that productivity is in a domain focused upon
by the standard Libraries. Picking tasks which lend themselves to the
strength of either language's standard Libraries does not provide us
with very much enlightenment.


Point 2.

The reason there is no "assert(data)" in the lisp code is that
that slot is automatically initialized when a new object is created.
( see the part in the slot declaration that says "initform". )

Point 3.

There is no command line testing in the lisp code because most lisp programs
are not invoked from the command line! There is no requirement that there
be some function "main" in a Lisp program.

How does the requirement that there be a function "main" increase programmer
productivity? In fact, "main" is really just a bothersome artifact in
this challenge since it is the "image class and its methods" that we are
trying to develop and test. This can be done in
'C' too.... using CodeCenter/ObjectCenter you could have skipped the "main"
junk and just written the 'C'/'C++' code.


>Although I did have to change the names offunctions
>load-image and save-image to get your program to run (on a Harlequin
>LispWorks system). Portability problems in the Lisp world?

Point 4.

Not really. However, the word "image" does have connotations in the "default"
Lisp world. Although these two function as not part of the ANSI standard
I think you'll find them in almost every vendor's implementation.
If the original class had used the slightly more descriptive
class name of 'bitmap' then perhaps the name clash wouldn't have occurred. ;-)


>I am not prepared to get into a debate on "exploratory programming" at the
>moment (is this a technical term with its own literature?). Certainly

Point 5.

In some sense "exploratory programming" == "rapid prototyping". You have
heard of the latter haven't you.

"Exploratory Programming" doesn't mean that you willy-nilly sit down at the
terminal and write you program. It means you have a not-so-flushed-out
design and you want to "flush out" the design by trying to implement
the not so well understood and/or novel parts of the design.

The model of development of "formally specify" then build ( namely the
"Waterfall" model software engineering) is clearly a "bust" in the
real world. It has not promoted any dramatic increases in programmer
productivity ( except in those projects that are "very well understood" ).
[ Go over software eng newsgroups and announce that the "Waterfall"
model is the "one true way and these new dynamic approaches to
development are bunk" and what the response. ]

That's because many projects that people what folks to code for aren't
very well understood. Which means it will be virtually impossible
to write a complete/accurate specification that will not have to grow
in an evolutionary fashion.... hence "exploratory programming".

What might be interesting for folks here to argue is how much of a
contribution the language versus the development environment gives to
"exploratory environment". For Common Lisp the environment and
Language have tended to "blur" together.

The impact on programming productivity that "exploratory programming"
environments has seem to be very relevent to the initial topic being discussed.

Well that seems to be a long enough post for today... ;-)


--

Lyman S. Taylor Comment by a professor observing two students
(ly...@cc.gatech.edu) unconscious at their keyboards:
"That's the trouble with graduate students.
Every couple of days, they fall asleep."

Lawrence G. Mayka

unread,
Dec 21, 1994, 10:45:20ā€ÆAM12/21/94
to
In article <D15BJ...@research.att.com> a...@research.att.com (Andrew Koenig) writes:

In article <3d86vh$1...@Mars.mcs.com> tm...@MCS.COM (Thomas H. Moog) writes:

> There is a way to measure language complexity to one significant
> digit: the size of a good manual for the language. The essentials for
> SmallTalk are covered in about 100 pages of the blue book by Goldberg
> (the rest is the "standard library" and details of implementation).
> The C++ ARM is about 400 pages and is much more difficult reading then
> Goldberg. How large is a Lisp manual ?

The Common Lisp manual is about 1,100 pages.

Mr. Moog is not including in his calculation the standard libraries,
only the "essentials" (i.e., the minimal language portion necessary to
derive the rest). Of Steele's "Common Lisp: the Language - Second
Edition," I would include as essential in this sense roughly the first
11 chapters--less than 300 pages--with some crossover in both
directions. (Some material in these chapters pertains to library
datatypes/functionality/forms, and some small portion of the material
in the rest of the book might be considered essential.)

Andrew Koenig

unread,
Dec 21, 1994, 12:35:25ā€ÆAM12/21/94
to
In article <3d86vh$1...@Mars.mcs.com> tm...@MCS.COM (Thomas H. Moog) writes:

> There is a way to measure language complexity to one significant
> digit: the size of a good manual for the language. The essentials for
> SmallTalk are covered in about 100 pages of the blue book by Goldberg
> (the rest is the "standard library" and details of implementation).
> The C++ ARM is about 400 pages and is much more difficult reading then
> Goldberg. How large is a Lisp manual ?

The Common Lisp manual is about 1,100 pages.
--
--Andrew Koenig
a...@research.att.com

John Nagle

unread,
Dec 21, 1994, 7:24:51ā€ÆPM12/21/94
to
s...@elsegundoca.ncr.com (Stan Friesen) writes:
>In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
>|> >
>|> > This is certainly NOT going to get the most effective use out of C++.
>|>
>|> Who cares? My main concern is to get the program working, not half working
>|> with lots of gotchas and memory leaks. I've seen too many C++ programs in
>|> this state. I'm wide open to suggestions as how to get the most effective
>|> use out of C++.

>Well, start with using a C++ style library, and then design your
>solution to make effective use of that library.

>Since it is being included in the standard, the STL is a good choice.
>This is a very general library with a large number of different containers,
>iterators, and many, many algorithms *using* said containers.

Is STL really going in? It isn't in Plauger's book of the
draft C++ Standard Library.

John Nagle

Henry G. Baker

unread,
Dec 21, 1994, 2:39:53ā€ÆAM12/21/94
to
In article <D15BJ...@research.att.com> a...@research.att.com (Andrew Koenig) writes:

I know. :-( It weights 2 lbs., 12 oz. If you drop it out of a window
onto someone, it would probably kill them.

It _is_ in WWW hypertext on the Inet, though, at CMU. Check it out.

Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hb/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hb/hbaker/home.html
************* Note change of address ^^^ ^^^^
(It _is_ accessible, but Netcom is loaded; keep trying.)

John Nagle

unread,
Dec 21, 1994, 2:06:52ā€ÆPM12/21/94
to
dhanley@picasso (David Hanley) writes:
>: for (p = data; p != data + width * height; p++)
>: *p = 255 - *p;
>: }

> This could be written *much* faster, to wit:
>int *p, e = ( int * ) ( data + width * height );
>assert( p != NULL ); //Much nicer this way.
>for( p = (int*) data ; p < e ; ++p)
> *p ^= 0xffffffff;

That's illegal in C++. See the ARM, section 5.4. You can't convert
a "char *" to an "int *". Some compilers (gcc, for one) allow this, but
they're wrong. Symantec C++ 7.0 gets it right:

Error: cannot implicitly convert
from: char *
to : int *

Actually, in C++, you can't even convert an "int *" to a "char *". C
allowed that, but C++ doesn't. You can force the issue by going through
a "void *", but the resulting pointer may not be valid for dereferencing.
Properly, a "void *" has the alignment restrictions of objects. Remember,
on many machines, the hardware won't operate on misaligned pointers.

John Nagle

Michael Callahan

unread,
Dec 21, 1994, 9:16:22ā€ÆPM12/21/94
to
Nick Mein (nm...@bifrost.otago.ac.nz) wrote:

: From: call...@xmission.com (Michael Callahan)

: > I do most of my programming in C and C++, although I would rather be working
: > in Lisp.

: > the C++ version had a lot of junk in it.

: Thanks for the insightful and informative comment.

I'll stand by that comment. Your program, while being nice from an abstract
design viewpoint, contained many many unnecessary lines of code. I think
in this case, just the shear size of your code made it unnecessarily
"complex".

: > main (int argc, char **argv)


: > {
: > FILE *in = fopen(argv[1]);
: > FILE *out = fopen(argv[2]);
: > unsigned char buffer;
: >
: > while (!feof(in)){
: > fread(buffer, 1, 1, in);
: > buffer = 255 - buffer;
: > fwrite(buffer, 1, 1, out);
: > }
: > }

: Oh, I see. Checking the number of command line arguments, checking
: the return value of system calls, returning a meaningful value to
: the system is a lot of junk.

: Making those 65536 calls to fread and fwrite was pretty smart - should
: really do wonders for performance.

: I'd prefer it if you were working in Lisp as well.


On the other hand, it only took about a minute to type into this news editor,
bugs and all (it's just a prototype, after all...)
As for the 65536 calls to fread, I simply didn't care at this point. Sure,
you or I could easily make the size of the buffer be 256*256 and pull the
read and write out of the loop. What's the point? It still runs faster than
it would have taken for me to just type in your example. Besides, perhaps
1) I wanted it to look like the lisp example, and 2) I knew I had buffered
I/O, 3) Designed it in such a way that it would have been easy to change.


I'd just like to reiterate my previous but chopped out comment that this
isn't a particularly interesting example. The world doesn't need yet
another C++ image library. You would have been much better off taking
an existing library, such as ImageMagick(freely available with the X11
distribution), and adding the functionality you wanted.

Michael Callahan
http://www.xmission.com/~callahan

Martin Cracauer

unread,
Dec 22, 1994, 7:14:48ā€ÆAM12/22/94
to
tm...@MCS.COM (Thomas H. Moog) writes:

The *Common* Lisp Manual is about 1000 Pages. But there are Lisp
dialects that are much smaller. Scheme is one of the smalltest
(full-featured) language of the world and the manual is smaller than
the Index of CLtL2 (the Common Lisp manual), including a useful
Libarary.

Complexity of languages cannot be compared by the size of their
manuals. You'll never find a way to distinguish the base language from
the library.

As for productivity, I find it obvious that bad notes for C++ come
mainly from the lack of a standard library. This leads to:
- Too many people implement too much of their lib by themself
- Some don't make use of libs in places where they they would be
better than manual management. This applies mainly for container libs
vs. simple array work.

This is of course a major hassle for any non-high-end project.

You really should count only voices from people with correct use of
libaries.
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@wavehh.hanse.de> Fax +49 40 522 85 36

Andrew Koenig

unread,
Dec 22, 1994, 9:13:20ā€ÆAM12/22/94
to
In article <19941221.141340...@UICVM.UIC.EDU> dha...@matisse.eecs.uic.edu (David Hanley) writes:
> David Hanley (dhanley@picasso)

> In fact, even better:

> void Reverse( char *f1 , char *f2 )
> {
> FILE *in = fopen( f1 , "r" );
> FILE *out = fopen( f2 , "w" );
>
> int i;
> while( fread( &i , 4 , 1 , in ) == 1 )
> {
> i ^= 0xffffffff;
> fwrite( &i , 4 , 1 , out );
> }
> fclose( in );
> fclose( out );
> }

Not better, actually, because it relies on an int being 4 bytes
of 8 bits each. If you're going to use this strategy,
the most portable is probably something like

int c;

while ((c = getchar()) != EOF)
putc(~c);

If I were using a C++ system that supported STL (the new template
library that has been accepted as part of the C++ standard)
I might be tempted to write something like this:

input_iterator<char> in(cin), eof;
output_iterator<char> out;

transform(in, eof, out, flip());

where flip is defined as follows:

class flip: public unary_function<char, char> {
public:
char operator() (char) {
return ~x; // or whatever you like
}
};

The class flip is needed only because the library doesn't include a
similar class to do unary ~ directly. If you're willing to use unsigned
chars and settle for the transformation that takes x into
(~(unsigned char)0)-x, there is a cleverer way to do it:

input_iterator<unsigned char> in(cin), eof;
output_iterator<unsigned char> out;

transform(in, eof, out,
bind1st(minus<unsigned char>(), (~(unsigned char)0)));

I think this compares favorably in complexity with the Lisp versions.
I also think it proves nothing -- the problem is too simple and doesn't
seem to hit any major issues.
--
--Andrew Koenig
a...@research.att.com

Robert J. Brown

unread,
Dec 22, 1994, 9:54:19ā€ÆAM12/22/94
to
Michael Callahan (call...@xmission.com) wrote:
: Nick Mein (nm...@bifrost.otago.ac.nz) wrote:

: : From: call...@xmission.com (Michael Callahan)

: : > I do most of my programming in C and C++, although I would rather be working
: : > in Lisp.

: : > the C++ version had a lot of junk in it.

: : Thanks for the insightful and informative comment.

: I'll stand by that comment. Your program, while being nice from an abstract
: design viewpoint, contained many many unnecessary lines of code. I think
: in this case, just the shear size of your code made it unnecessarily
: "complex".

: : > main (int argc, char **argv)
: : > {
: : > FILE *in = fopen(argv[1]);
: : > FILE *out = fopen(argv[2]);
: : > unsigned char buffer;
: : >
: : > while (!feof(in)){
: : > fread(buffer, 1, 1, in);
: : > buffer = 255 - buffer;

^^^^^^ ---------->>> this should be ~buffer
since you do not know what the
implementation of arithmetic is
in general. It couold be 1's comp-
liment, signed magnitude, or even
packed decimal. The '~' operator
is *ALWAYS* going to produce the
1's compliment of the operand!
: : > fwrite(buffer, 1, 1, out);

Robert J. Brown

unread,
Dec 22, 1994, 9:47:59ā€ÆAM12/22/94
to
Stan Friesen (s...@elsegundoca.ncr.com) wrote:

: In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
: |> that should obviously be done in C. Here is my challenge:
: |>
: |> "You are given spectral sound files representing frequency, amplitude,
: |> bandwidth and time of sounds picked up by hydrophones in the ocean. You are
: |> looking for torpedo sounds but the hydrophones pick up the sounds of the
: |> ship dragging the hydrophones and other miscellaneous noise in the ocean.
: |> Develop an existing standard blackboard system shell and source model and
: |> discover the blackboard level abstractions that allow the top level of the

In my experience, a neural network classifier would probabvly be a better
choice than a blackboard architecture and a set of brittle rules. Acoustic
signature recognition is one of the early classic successes of ANNs.

: |> blackboard to identify a torpedo source specified in the source model.

Jeff Dalton

unread,
Dec 22, 1994, 11:46:06ā€ÆAM12/22/94
to
I don't know C++, since it's too complex for me to want to learn it
until I have to.

I prefer to develop in Lisp than C; the results tend to be shorter
in terms of number of chars or lines of code but less efficient
than what I good C programmer would produce. Unfortunately, this
is just qualitative, but I've often found myself finished when
C (or equiv) programmers were still writing some storage management.
There are also cases when I could have used either language but
could so things w/ less work in Lisp.

Until I came to Edinburgh in 1983, I wrote far more in non-Lisp than
in Lisp, and so I'm pretty sure it's not just due to "what I'm used to".

-- jeff

Lawrence G. Mayka

unread,
Dec 22, 1994, 9:08:22ā€ÆAM12/22/94
to
In article <3d9uv4$r...@pravda.cc.gatech.edu> ly...@cc.gatech.edu (Lyman S. Taylor) writes:

The MAJOR speed difference between these programers is that C's std library
has routines that can read/write blocks of bytes at a
time and Lisp's std. library doesn't. Big deal! Any decent Lisp vendor
could add that into a lisp implementation in a small amount of time.

However, as was expressed ( several people saying... "I don't normally do
file I/O in Lisp ... I had to look it up" ), doing file I/O is not something
most Lisp users preoccupy themselves with. Therefore, I don't expect very
many vendors to run out and add this "feature".

Actually, the ANSI standard for Common Lisp includes READ-SEQUENCE and
WRITE-SEQUENCE, which are meant for this very purpose; but X3J13 added
these two functions rather recently, so vendors may not support them
portably yet.

Jeff Dalton

unread,
Dec 22, 1994, 11:52:52ā€ÆAM12/22/94
to
In article <3d75oq$r...@transfer.stratus.com> d...@phlan.sw.stratus.com writes:
>
>I don't think anyone would argue against the statement that a small program
>in C++ takes more design and planning than one in C, or Lisp. But when
>the program gets larger, the pieces tend to fit together with fewer
>deleterious interactions than programs written from a procedural
>mindset. On the other hand, Lisp takes a functional approach, with
>procedural features tacked on to the language after. My question
>really comes down to, "Does the functional approach of Lisp scale up
>as well as the OO approach of C++ (and other OO languages)."

But this is just false. You can easily write procedural of OO Lisp.
Moreover, most Lisp programs are *not* functional.

>I'm really talking less about the languages themselves than about the
>programming paradigms they are designed to facilitate, although
>inevitably, how well the language assists the user in holding to the
>paradigm is also relevent.

Common Lisp is, if anything, better at procedural and OO programming
than at functional programming.

-- jeff

Jeff Dalton

unread,
Dec 22, 1994, 12:04:44ā€ÆPM12/22/94
to
In article <3d9uv4$r...@pravda.cc.gatech.edu> ly...@cc.gatech.edu (Lyman S. Taylor) writes:
>In some sense "exploratory programming" == "rapid prototyping". You have
>heard of the latter haven't you.
>
>"Exploratory Programming" doesn't mean that you willy-nilly sit down at the
>terminal and write you program. It means you have a not-so-flushed-out
>design and you want to "flush out" the design by trying to implement
>the not so well understood and/or novel parts of the design.

Of course, this assumes that you can produce the 1st version quickly,
which *I*', more likekly to be able to do in Lisp than in C. Reasons

* Interavtive, incremental implementations.
* Date that's easy to inspect at run-time (e.g. date includes
its type and usually has a printed prerestnation (or else
works with inspect/describe).
* Don't have to worry about storage management.
* It's easy to write macros and higher-order functions.
(Lisp macros are more powerful than C macros).

-- jeff

m...@mole-end.matawan.nj.us

unread,
Dec 22, 1994, 12:49:58ā€ÆPM12/22/94
to
In article <LGM.94De...@polaris.ih.att.com>, l...@polaris.ih.att.com (Lawrence G. Mayka) writes:
> In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:
>
> In addition to initial implementation, what about maintainability? Which
> language is more reliably modified when the requirements evolve?
>
> I certainly agree that =evolvability= is perhaps the most important
> characteristic of a large application nowadays. ...

The first reuse of code in Version N is as the base of Version N + 1 .
--
(This man's opinions are his own.)
From mole-end Mark Terribile
m...@mole-end.matawan.nj.us, Somewhere in Matawan, NJ
(Training and consulting in C, C++, UNIX, etc.)

John R. Bane

unread,
Dec 22, 1994, 10:45:54ā€ÆAM12/22/94
to
In article <D16Ho...@lcpd2.SanDiegoCA.NCR.COM> s...@elsegundoca.ncr.com writes:
>
>Well, start with using a C++ style library, and then design your
>solution to make effective use of that library.
>
>Since it is being included in the standard, the STL is a good choice.
>This is a very general library with a large number of different containers,
>iterators, and many, many algorithms *using* said containers.
>
As a long-time Lisp and C hacker who has gone to great pains to learn C++,
I believe the STL is seriously flawed in that it forces the use of reference
semantics - all the STL containers own their objects and give out only
references to them, which makes building a container of references (i.e.
a list of pointers to objects) unnecessarily slow and syntactically
obnoxious. I would love to be proven wrong about this, but I don't think
it's going to happen.

Also, the base semantics of STL sequences make it essentially impossible
to implement them in terms of simple linked lists; the basic add-to-the-middle
operator is insert-before. This means the STL will give me a doubly-linked
list when I ask for a list<foo *> - 200% overhead is a bit much, even when
I'm willing to pay 100%.

Follow-ups to comp.lang.c++...
--
Internet: ba...@tove.cs.umd.edu
UUCP:...uunet!mimsy!bane
Voice: 301-552-4860

Stan Friesen

unread,
Dec 21, 1994, 3:45:38ā€ÆPM12/21/94
to
In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
|> >
|> > This is certainly NOT going to get the most effective use out of C++.
|>
|> Who cares? My main concern is to get the program working, not half working
|> with lots of gotchas and memory leaks. I've seen too many C++ programs in
|> this state. I'm wide open to suggestions as how to get the most effective
|> use out of C++.

Well, start with using a C++ style library, and then design your


solution to make effective use of that library.

Since it is being included in the standard, the STL is a good choice.
This is a very general library with a large number of different containers,
iterators, and many, many algorithms *using* said containers.

If you need matrix manipulation, there are commercial matrix libraries
for C++. (And indeed, general math libraries).

You will get far more effective use of C++ using STL than using a
Lispish library that does much the same sorts of things.

Memory leaks are relatively easy to deal with if you use a proper
constructor/destructor based resource allocation scheme. In most
cases a resource, such as memory, should be allocated in the constructor(s)
for a class, and released in the destructor. Even if it is not possible
to put the allocation in the constructors, the destructor should *always*
release it.

|> > Effective C++ programming style requires a different approach than
|> > Lisp programming - the best solution of a given problem is often
|> > different in the two languages.
|>
|> I think that I've tried this "C++ programming style" that you speak of and I
|> don't like it. I've read some of the C++ books. If I had to characterize
|> it I would say that this style (if I'm hearing you) is too static, and I
|> have to go way out of my way to write extraneous code to support this static
|> style, which introduces all sorts bugs and obscurities. That's why I've
|> changed to a more Lispy programming style. I find that it is quite
|> effective.

This sounds as if you never did quite learn how to use it appropriately.
The static *type* system is a very powerful debugging tool, if used
properly, and not bypassed.

But, to do this you first need to design your system so that it is
compatible with this approach. The design phase is critical. In
general, if you find yourself needing to "program around" the type
system, you are probably using the wrong design for C++.

Or, by "static" do you mean something else? [Certainly, dynamic lists
and the like are quite easily handled in C++, even polymorphic ones,
so I cannot see that you would mean that].

|> Well, I'm telling you from experience that the Formal Design approach for
|> the kinds of problem that I mention is going to fail you. You can sit in a
|> room and formally design all you want but you are not going to get anywhere
|> until you get some visualization of your data world.

There are lots of ways to do this, using existing tools and methods.
Once the visualization phase is complete, then go to the analysis
phase, and then the design phase. *Then* start coding.

David Hanley

unread,
Dec 21, 1994, 4:10:41ā€ÆPM12/21/94
to
Larry Hunter (hun...@work.nlm.nih.gov)
: I am reasonably experienced in lisp. The following program took me 2
: minutes and 49 seconds to design, code, debug and test. (I tested it about
: as much as you did -- it runs fine on a single example). Most of that time
: was spent looking up read/write-byte in the manual because I wasn't sure it
: did exactly what I wanted -- it does.

: I guess LISP is more like 20x more productive. And much easier to read,
: understand and debug.

Your program is not at all the same thing! It is similar to the
C program that I posted, except it is much slower. The C program also took
me two minutes to write.

Tom O Breton

unread,
Dec 22, 1994, 5:27:38ā€ÆPM12/22/94
to
One thing this thread has roundly demonstrated is that it is not easy to
compare productivity, particularly since people can interpret the same
requirements in widely different ways.

The first guy wrote a program that checked for several errors and
allowed different image sizes to be handled by simply changing height &
width. No _wonder_ that took longer! IMO that is entirely due to the
more ambitious approach, which swamps any differences due to language.

What _is_ important to me is not how fast one can bang out trivial code,
but how easily one can refine existing code.

IE, it may take me a few extra seconds to type

int
main( int argc, char** argv )

but the effect of that on my productivity is negligible. However, if
(say) changing a variable takes me 30 seconds to find every instance
(with editor help) and 30 seconds to recompile, that _does_ have a
significant effect on productivity.

Another important issue is how easy it is to provide the
"non-functional" stuff: safety and efficiency. For instance, now that
C++ has exception handling the first program need not have written out
all those checks against NULL and presumably would thereby have been
written about as fast as the others.

Tom

--
t...@world.std.com
TomB...@delphi.com: Author of The Burning Tower

Nick Mein

unread,
Dec 21, 1994, 10:01:21ā€ÆPM12/21/94
to

ly...@cc.gatech.edu (Lyman S. Taylor) wrote:

> Point 1.

> Ahem, could we *PLEASE* stick with the initial challenge. Which was to
> measure PROGRAMMER PRODUCTIVITY! NOT I can write C code that executes faster
> than Lisp code.

I was careful in my challenge _not_ to mention execution speed. However,
the issue was brought up by two of the Lisp programmers that responded.

> The MAJOR speed difference between these programers is that C's std library
> has routines that can read/write blocks of bytes at a
> time and Lisp's std. library doesn't.

Well, I don't know about your system, but I got the following times
(all on the same SparcStation. N = 1024. Times in seconds):

Lisp program (compiled):

load-image 27.9
invert 12.3
save-image 45.4

Total 85.6

My C++ program:

Total 1.8

David Hanley's new & improved C++ program:

Total 0.6

> Point 2.

> The reason there is no "assert(data)" in the lisp code is that
> that slot is automatically initialized when a new object is created.
> ( see the part in the slot declaration that says "initform". )

The reason that there should be an "assert(data)" is to catch any attempt
by a programmer to write something like:

(setf im (make-instance 'image))
(invert im)
(save-image im filename)
;; I've just processed & saved garbage.

> Point 3.

> How does the requirement that there be a function "main" increase programmer
> productivity?

it wasn't my intention to throw in a red herring. I was responding to
Marco's claim that "the CL program does more than the C++ program" by
pointing out something that the C++ program does that the Lisp one doesn't.

> Point 4.

Well, its getting a bit off topic. I'll leave it.

> Point 5.

> In some sense "exploratory programming" == "rapid prototyping". You have
> heard of the latter haven't you.

I could have imagined several possible meanings of "exploratory programming":

== rapid prototyping
== incremental development
== good old fashioned hacking

> The model of development of "formally specify" then build ( namely the
> "Waterfall" model software engineering) is clearly a "bust" in the
> real world.

I agree. To quote Booch: "Just because object oriented design embodies an
incremental, iterative process does not mean that one must abandon all
good management practices ... Analysis is still important, and the need
for a well articulated design does not disapear"

I happen to believe that C++ is well suited to expressing an object-oriented
design (as Marco demonstrated, the same design can be expressed in CLOS),
and well suited to incremental development.

On the other hand, "rapid prototyping" can be misused. From
call...@xmission.com (Michael Callahan) we have:

: Well, that's it. I don't know if it's a fair comparison, the C++ version
: had a lot of junk in it. If I were quickly prototyping in C:
: (Note the emphasis on quickly!)


: #include <stdio.h>

: main (int argc, char **argv)
: {
: FILE *in = fopen(argv[1]);
: FILE *out = fopen(argv[2]);
: unsigned char buffer;

: while (!feof(in)){
: fread(buffer, 1, 1, in);
: buffer = 255 - buffer;

: fwrite(buffer, 1, 1, out);
: }
: }

> What might be interesting for folks here to argue is how much of a
> contribution the language versus the development environment gives to
> "exploratory environment". For Common Lisp the environment and
> Language have tended to "blur" together.

Good point.

> The impact on programming productivity that "exploratory programming"
> environments has seem to be very relevent to the initial topic being
> discussed.

I agree.


--
Nick Mein
MSc Student
Dept of Computer Science
University of Otago
Dunedin
New Zealand.

Jim McDonald

unread,
Dec 22, 1994, 9:05:27ā€ÆPM12/22/94
to

[Apologies again for the length of this post, but it's hard to illustrate
some ideas with anything less than a transcript.]

In article <3d8j9r$c...@celebrian.otago.ac.nz>, nm...@bifrost.otago.ac.nz (Nick Mein) writes:
|>
|> Several people (Michael Callahan, Erik Naggum, Mike McDonald, Larry Hunter)
|> seemed to miss the point of my challenge. Of course I could have
|> written the toy program in a few lines of C. I tried to come up with a problem
|> that C++ seemed well suited to, but that did not take hours or pages
|> of code to solve. A real Image class would have other operations
|> defined than load, invert and save; and would be part of a much
|> larger program (and ideally reusable, so that it could be used in many such
|> programs).
|>
|> From: mar...@mosaic.nyu.edu (Marco Antoniotti)
|>
|> > 20 Minutes. I do not touch type. I tried to write a "multidimensional"
|> > 'read-sequence' (a function with that name will be in the upcoming CL
|> > ANSI, the first ANSI OO Language standard), decided it was not worth
|> > the effort, went back to the original version, corrected a few errors
|> > (there are still a few around), checked CLtL2 a couple of times
|> > (actually more) and looked up some documentation strings on my
|> > faithful AKCL.
|>
|> > Pretty good, isn't it? :)
|>
|> Yes, I'm impressed. Although I did have to change the names of functions


|> load-image and save-image to get your program to run (on a Harlequin
|> LispWorks system). Portability problems in the Lisp world?
|>

|> > Moreover, the CL program does more than the C++ program
|>
|> Could you be more specific? I don't read Lisp well enough to be able to
|> detect the extra functionality. Actually, I think that you have supplied
|> somewhat less than I have. Your image class (which, admittedly, is the
|> significant part of the example) is fine, but your test function does not
|> perform the command line parsing and handling of errors performed
|> by main.

Common Lisp implementations tend to provide a lot of such error handling
automatically--in fact much of that handling is specified by the ANSI
standard, although offhand I'm not sure where the exact boundaries are.

For example, WITH-OPEN-FILE handles a much wider range of errors than
your main program, in a more graceful manner:

> (with-open-file (s "foo.baz") (format t "[Form read is ~S]" (read s)))
>>Error: Cannot open file "foo.baz" - No such file or directory

OPEN:
Required arg 0 (PATHNAME): "foo.baz"
Keyword arg 1 (DIRECTION): :INPUT
Keyword arg 2 (ELEMENT-TYPE): STRING-CHAR
Keyword arg 3 (IF-EXISTS): NIL
Keyword arg 4 (IF-DOES-NOT-EXIST): :ERROR
Keyword arg 5 (ATTRIBUTES): NIL
:C 0: Use a new pathname
:A 1: Abort to Lisp Top Level

-> :c
Use a new pathname
Filename (default is "/usr/home/kestrel/mcdonald/foo.baz"): hack.lisp
[Form read is (IN-PACKAGE "USER")]

|> Also (maybe significantly) there does not appear to be the equivalent
|> of my "assert(data)" - an important check that some client of the class
|> will not get away with doing something stupid - like manipulating an image
|> that hasn't been loaded yet!

In lisp, runtime type-checking catches many such errors, entering an
interactive debugger from which repairs can be attempted.

Using the quick hack I wrote:

> (read-image ".cshrc")
>>Error: End of file on stream #<Stream OSI-BUFFERED-STREAM "/usr/home/kestrel/mcdonald/.cshrc" EA05FE>

READ-BYTE:
Required arg 0 (STREAM): #<Stream OSI-BUFFERED-STREAM "/usr/home/kestrel/mcdonald/.cshrc" EA05FE>
Optional arg 1 (EOF-ERROR-P): T
Optional arg 2 (EOF-VALUE): NIL
:C 0: Return the given eof-value: NIL
:A 1: Abort to Lisp Top Level

-> :c
Return the given eof-value: NIL
>>Error: The value NIL, given to LUCID::|SETF of AREF (rank=2)|,
is the wrong type for storing into a (UNSIGNED-BYTE 8) array.

LUCID-RUNTIME-SUPPORT:SET-2DIM-AREF-SUBR:
:C 0: Supply a new value:
:A 1: Abort to Lisp Top Level

-> :a
Abort to Lisp Top Level
Back to Lisp Top Level

In this case I knew that using NIL for the read-byte would lose, but I
proceeded anyway to show what happens when you try to process garbage.
Lisp is vastly less likely to allow you to process garbage silently
than a corresponding C/C++ program.

...

|> Oh, I see. Checking the number of command line arguments, checking
|> the return value of system calls, returning a meaningful value to
|> the system is a lot of junk.

Well, it is nicer to get all of that automatically without having
to clutter your code with it:

> (read-image "/tmp/hack" 47)
>>Error: READ-IMAGE called with 2 arguments, but only 1 argument is allowed

READ-IMAGE:
:C 0: Ignore extra arguments
:A 1: Abort to Lisp Top Level

->

...

|> From: Erik Naggum <er...@naggum.no>
|>
|> > the functions are also moderately fast.
|>
|> > * (time (make-image "/tmp/test"))
|> > Compiling LAMBDA NIL:
|> > Compiling Top-Level Form:
|>
|> > Evaluation took:
|> > 0.27 seconds of real time
|> > 0.26 seconds of user run time
|> > 0.01 seconds of system run time
|> > 0 page faults and
|> > 1344 bytes consed.
|> > NIL
|> > * (time (invert-image "/tmp/test" "/tmp/test.out"))
|> > Compiling LAMBDA NIL:
|> > Compiling Top-Level Form:
|>
|> > Evaluation took:
|> > 0.56 seconds of real time
|> > 0.54 seconds of user run time
|> > 0.02 seconds of system run time
|> > 0 page faults and
|> > 3944 bytes consed.
|> > NIL
|>
|> > CMU CL 17f on a 50 MHz SPARCstation 2. /tmp is in memory.
|>
|> Depends what you mean by fast, I suppose.
|>
|> time invert test.img out.img
|> 0.0u 0.0s 0:00 100% 57+253k 0+12io 0pf+0w

Well, 6 minutes (or whatever it took Erik) + .27 second + .56 seconds
certainly seems like a win over 60 minutes + 0.0 seconds.
As I mentioned in my message, you underspecified the spec for your
program. Certainly (to me at least) you did not imply anything about
how fast it should run.

...

|> From: mcdo...@kestrel.edu (Jim McDonald)
|>
|> > There are at leas a couple of problems with this challenge.
|>
|> > The main problem is that you have given an overly specified problem.
|>

|> I was hoping that people would (if they took the challenge seriously at all)
|> look at the spirit of the challenge without me laying down the rules
|> explicitly. Marco Antoniotti did so, and so did you with your second & third
|> versions. Thanks.

I tried, but to be honest, I really didn't know what you wanted--it wasn't
an idle complaint.

|> > The second problem is that the task you chose to measure is very low
|> > level and doesn't really use any interesting features of a language--
|> > it's almost nothing more than a few calls to the OS.
|>

|> But my example does use an interesting feature of C++ - encapsulation.

Ok, but the motivation does seem a little obscure for this task.
(I'm not saying encapsulation is bad, or even that it's inappropriate
for this task, just that you probably have some unspecified intentions
for this task that make it desirable. Not knowing those intentions,
it's hard to say if your code was good, bad, or indifferent wrt them.)

|> > I'll do three versions:
|> > one very simple and close to your example, a second slightly
|> > more abstract, and a third closer to the problem I've given
|> > above.
|>

|> > SECOND VERSION
|> > I used parameterized types and sizes, 2D arrays, and added an
|> > explicit test routine.
|>

|> This sounds a bit closer to my version.

I could quibble, but fair enough.

|> Adding a header to the image
|> file didn't seem to add anything interesting to the problem.

It was a step towards the third version where the data has a variable
format, given by the header. Again, I ask you how long it would take
to revise your program to accept an array of arbitrary byte formats,
e.g. 128-bit bytes, with any number of dimensions, e.g. 256 * 256 * 4 * 2,
where the formats are given in the data file. Such a capability seems
to be far more object-oriented than the minimal encapsulation you used.
I think your little *p hack becomes problematic.

But to return to my point about specs, I probably have a completely
different notion than you of where this thing would be headed, so I'm
not sure if any further comparison (or even this comparison!) is
meaningful.

|> > I'll do three versions:

..


|> > Elapsed Real Time = 5.14 seconds
|> > Total Run Time = 5.09 seconds
|> > User Run Time = 5.06 seconds
|> > System Run Time = 0.03 seconds
|> > Process Page Faults = 12
|> > Dynamic Bytes Consed = 131,088
|> > Ephemeral Bytes Consed = 5,168

|> The run times speak for themselves.

They most certainly do not. I was asked to quickly write a program
that did a simple task, which I accomplished. As an afterthought,
I timed it a couple of different ways, just to illustrate that it
wasn't impossibly slow.

While I was writing the code I did several things that I knew full
well were incredibly inefficient at runtime. For example, I read
each byte with a separate system call, which is obviously ridiculous
if performance is an issue. (I did even more egregious things in
the third version.)

Another ten minutes of hacking on the version most analogous to your
version gives this result on a Sparcstation 2:

> (time (write-image "/tmp/x2" (invert-image (read-image "/tmp/x1"))))
Elapsed Real Time = 0.22 seconds
Total Run Time = 0.14 seconds
User Run Time = 0.12 seconds


System Run Time = 0.02 seconds

Process Page Faults = 44


Dynamic Bytes Consed = 131,088

Ephemeral Bytes Consed = 3,568
#<Simple-Vector (UNSIGNED-BYTE 8) 65536 114A926>
>

Most of my time on that was wasted due to my not having a manual
handy, so I had to experiment a bit to find the right invocations
to make it run faster. Invert-image by itself takes about 0.04
seconds for a 256*256 array, and with a quick hack that is about
as [un]safe as your *p loop, I get a version of invert-image
that runs in 0.01 seconds by processing 4 bytes at a time.

With a manual I could reduce both this and the overall time
further, but for me at least I've already gone far beyond the
point of diminishing returns, since I'll probably never run this
thing again.

If I really cared fantastically about speed I'd spend a few hours
to add a compiler-macro to produce hand-coded optimal assembly
code for the target machine whenever the compiler saw the pattern
to be optimized. That tends to create programs a few per cent
faster than C compilers can create.


Jim McDonald

unread,
Dec 22, 1994, 9:26:24ā€ÆPM12/22/94
to

In article <D18H2...@world.std.com>, t...@world.std.com (Tom O Breton) writes:
|> One thing this thread has roundly demonstrated is that it is not easy to
|> compare productivity, particularly since people can interpret the same
|> requirements in widely different ways.
|>
|> The first guy wrote a program that checked for several errors and
|> allowed different image sizes to be handled by simply changing height &
|> width. No _wonder_ that took longer! IMO that is entirely due to the
|> more ambitious approach, which swamps any differences due to language.

Excuse me? The "first guy" (Nick) wrote a program that checked for about
1/100 of the errors that the lisp versions looked for, and was not
parameterized to handle different dimensions (e.g. 100 * 100 * 3) or
different byte sizes (e.g. 4 bit or 16 bit or 1024 bit bytes), and it
still took him about four times longer to write.

He wrote a *less* ambitious program and it took him longer. Now, what
does that say about the languages? [Actually, not much, I fear, given
the trivial nature of the task.]

|> What _is_ important to me is not how fast one can bang out trivial code,
|> but how easily one can refine existing code.

Which is why I asked Nick to revise his to accept a header format
in his data files to specify the data format and to allow structured
modifications to the internal array (e.g. invert just the first
plane). That's already present in the version I wrote that took
half the time he spent for the trivial version.

|> IE, it may take me a few extra seconds to type
|>
|> int
|> main( int argc, char** argv )
|>
|> but the effect of that on my productivity is negligible. However, if
|> (say) changing a variable takes me 30 seconds to find every instance
|> (with editor help) and 30 seconds to recompile, that _does_ have a
|> significant effect on productivity.
|>
|> Another important issue is how easy it is to provide the
|> "non-functional" stuff: safety and efficiency. For instance, now that
|> C++ has exception handling the first program need not have written out
|> all those checks against NULL and presumably would thereby have been
|> written about as fast as the others.

Very true. Part of the luxury of lisp environments is that the
exception handling systems have about 20 years of tradition and
experience to build on, providing them with mature capabilities.

Patrick D. Logan

unread,
Dec 22, 1994, 9:00:18ā€ÆAM12/22/94
to
>In article <3d86vh$1...@Mars.mcs.com> tm...@MCS.COM (Thomas H. Moog) writes:
>> The C++ ARM is about 400 pages and is much more difficult reading then
>> Goldberg. How large is a Lisp manual ?

In article <D15BJ...@research.att.com> a...@research.att.com (Andrew Koenig) writes:
>The Common Lisp manual is about 1,100 pages.

How much of the C++ ARM is used to describe the run-time library of functions
and how much is used to describe the language mechanisms themselves?

I would venture to guess a large portion of CLtL is used to describe
functions in what could be considered "the run time libraries". That is the
language as a core is not 1100 pages.

Consider that CLtL is describing lists, strings, hash tables, syntax
extension, read tables, the standard read macros, etc. All useful things that
go beyond what is offered by C++, per se.

More wars. Gotta love 'em.


Patrick...@ccm.jf.intel.com
Intel/Personal Conferencing Division
(503) 264-9309 FAX: (503) 263-3375

"What I envision may be impossible, but it isn't impractical."
-Wendell Berry

Devin Cook

unread,
Dec 23, 1994, 4:03:16ā€ÆAM12/23/94
to
>
>Again, it seems like finding a way to prove the productivity ratio is elusive.
>However the C++ advocates should start to suspect that if several people
>independently report there is in fact a productivity ratio in favor of Lisp
>then it probably is not a myth. I have yet to hear anyone report that they
>could develop faster in C++ than in Lisp (excepting of course those who are
>not Lisp fluent).
>
>--
>William P. Vrotney - vro...@netcom.com

Actually, if you ask the question are there things you can develope in C++ that
would be difficult if not impossible in Lisp?

Yes, of course. How about and utility under 100k? Also, CLOS is not rich
enough (IMHO) to work well with objects such as Windows GDI objects.

Lisp has it's place, but it is a small piece of the pie.


-- Devin


--
==========================================================================
| They told us of a second coming, | |
| So we keep looking towards the sky. | |
| But its not a Savior that we want, | |
| Just someone else to crucify. -- New Model Army | Devin Cook |
==========================================================================


Lawrence G. Mayka

unread,
Dec 23, 1994, 9:45:56ā€ÆAM12/23/94
to

In article <LGM.94De...@polaris.ih.att.com>, l...@polaris.ih.att.com (Lawrence G. Mayka) writes:
> In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

> In addition to initial implementation, what about maintainability? Which
> language is more reliably modified when the requirements evolve?

> I certainly agree that =evolvability= is perhaps the most important
> characteristic of a large application nowadays. ...

The first reuse of code in Version N is as the base of Version N + 1 .

I agree. A correct inference from that fact is that "Reuse begins at
home." That is, the first and greatest reuse is the ability to evolve
the application itself through time (whether by widely spaced and
strictly delimited versions, or by incremental update) economically,
smoothly, and rapidly.

Bernie Lofaso

unread,
Dec 23, 1994, 9:24:45ā€ÆAM12/23/94
to

O.K. I told myself I wasn't going to participate in this assumptotically
approaching silly discussion, but then I decided to include that in my
New Year's resolutions. So I guess it's o.k. to throw my 2 cents in. :-)


In article <D182M...@cogsci.ed.ac.uk> je...@aiai.ed.ac.uk (Jeff Dalton) writes:
Of course, I'm just one person, but I have written fairly large
amounts of code in assembler and in various higher-level languages
(such as PL/1 subset G and C) and I've written a Lisp-to-C compiler,
so I think I am pretty well placed to compare Lisp w/ other languages.

One would think that in performing productivity comparisons one would rely
on opinions of people with large (for some value of large) amounts of
experience in both languages. Therefore I would tend to give Mr.Dalton's
opinions a bit more weight since he purports to have such experience. In
addition, my limited experience indicates Lisp coding to go faster than
either C or C++. I retain some degree of skepticism, even though I agree
with the premiss of Lisp having a higher "productivity", because I am not
totally convinced that the practitioners proclaiming its superiority are
not (unintentionally) using test cases which are naturally more suitable
for Lisp than C or C++. I don't know many (read none) people who write Lisp
to interface to X-windows or make Sybase queries. These tasks are more what
I'd envision as being a part of real world problems, not writing blackboard
systems. The simple observation that people with much Lisp expertise had to
look up file I/O smells more than faintly of people who deal with
specialised programs that probably wouldn't meet most definitions of real
world programming. (Alert! Sweeping generalization here. Insert your own
doubt factor to achieve comfort.)

I would tend to agree with another common conception, that being that, in
general, Lisp programs are frequently less efficient than their C or C++
counterparts. Much of this conception, in my opinion, is due to the lack of
data structure richness in early Lisp dialects. Obviously, code
inefficiencies result if I must represent everything as a list rather than
say a struct. Common Lisp has gone a long way to enriching available data
structures and I think it likely that good CL programmers can approach C
and C++ efficiency in most programs. What they won't tell you when they
make their "I can be nearly as efficient as you" argument, is that there is
an additional level of effort involved and their productivity ratio suffers
greatly. There ain't no such thing as a free lunch - remember. You either
sweat the details and trade off speed for development time or you
don't. Arguments of CL programmers having it both ways should be HIGHLY
suspect.

Bernie Lofaso
Applied Research Labs

Fergus Henderson

unread,
Dec 23, 1994, 1:29:55ā€ÆAM12/23/94
to
na...@netcom.com (John Nagle) writes:

> Is STL really going in? It isn't in Plauger's book of the
>draft C++ Standard Library.

Yes, it is.

--
Fergus Henderson - f...@munta.cs.mu.oz.au

Jeff Dalton

unread,
Dec 22, 1994, 12:15:45ā€ÆPM12/22/94
to
In article <hbakerD1...@netcom.com> hba...@netcom.com (Henry G. Baker) writes:
>In article <D15BJ...@research.att.com> a...@research.att.com (Andrew Koenig) writes:
>>In article <3d86vh$1...@Mars.mcs.com> tm...@MCS.COM (Thomas H. Moog) writes:
>>
>>> There is a way to measure language complexity to one significant
>>> digit: the size of a good manual for the language. The essentials for
>>> SmallTalk are covered in about 100 pages of the blue book by Goldberg
>>> (the rest is the "standard library" and details of implementation).
>>> The C++ ARM is about 400 pages and is much more difficult reading then
>>> Goldberg. How large is a Lisp manual ?
>>
>>The Common Lisp manual is about 1,100 pages.
>
>I know. :-( It weights 2 lbs., 12 oz. If you drop it out of a window
>onto someone, it would probably kill them.

I believe that *properly presented* CL is not so bad. I have a fair
amount of experience in teaching and using CL and well as in devising
subsets (ISO stds work), so this is not just a guess. OTOH, I don't
think I've figured out the best way to present it yet.

However, much of CL can be seen as a library. For instance, all of
the multi-keyword functions can be seen this way. CL does not provide
a very good set of primitives, since the design is aimed at the
"right" user-level functionality. So there's plenty of room for
improvement. But if I want to test out something non-Lispy (eg
using sokets) I usually build something in Lisp + C (in AKCL, which
makes this fairly easy).

Of course, I'm just one person, but I have written fairly large
amounts of code in assembler and in various higher-level languages
(such as PL/1 subset G and C) and I've written a Lisp-to-C compiler,
so I think I am pretty well placed to compare Lisp w/ other languages.

-- jeff


Michael Callahan

unread,
Dec 22, 1994, 6:12:05ā€ÆPM12/22/94
to
Robert J. Brown (r...@wariat.org) wrote:
[chop]
: : : > main (int argc, char **argv)

: : : > {
: : : > FILE *in = fopen(argv[1]);
: : : > FILE *out = fopen(argv[2]);
: : : > unsigned char buffer;
: : : >
: : : > while (!feof(in)){
: : : > fread(buffer, 1, 1, in);
: : : > buffer = 255 - buffer;
: ^^^^^^ ---------->>> this should be ~buffer
: since you do not know what the
: implementation of arithmetic is
: in general. It couold be 1's comp-
: liment, signed magnitude, or even
: packed decimal. The '~' operator
: is *ALWAYS* going to produce the
: 1's compliment of the operand!
: : : > fwrite(buffer, 1, 1, out);
: : : > }
: : : > }

That's not true. You can do arithmetic on unsigned chars in C, it works
fine. I have yet to see a C system in which unsigned chars are not just
bytes.

You were close to finding the bug though. fread and fwrite require a
char *, not a char. It should have been:

while(!feof(in)) {
fread(&buffer, 1, 1, in);
buffer -= 255;
fwrite(&buffer, 1, 1, out);
}

Also, I believe initializing in and out where they are declared with
non static values is a C++ feature, not part of C. I just didn't want
to have to type as much in for this example. There's no reason to obscure
the basic algorithm with unrelated bits of code.

Michael Callahan

William Paul Vrotney

unread,
Dec 22, 1994, 6:40:18ā€ÆPM12/22/94
to
In article <3dc3iv$f...@wariat.wariat.org> r...@wariat.org (Robert J. Brown) writes:

> Stan Friesen (s...@elsegundoca.ncr.com) wrote:
> : In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
> : |> that should obviously be done in C. Here is my challenge:
> : |>
> : |> "You are given spectral sound files representing frequency, amplitude,
> : |> bandwidth and time of sounds picked up by hydrophones in the ocean. You are
> : |> looking for torpedo sounds but the hydrophones pick up the sounds of the
> : |> ship dragging the hydrophones and other miscellaneous noise in the ocean.
> : |> Develop an existing standard blackboard system shell and source model and
> : |> discover the blackboard level abstractions that allow the top level of the
>
> In my experience, a neural network classifier would probabvly be a better
> choice than a blackboard architecture and a set of brittle rules. Acoustic
> signature recognition is one of the early classic successes of ANNs.
>

Perhaps. But in the project cited training a NN would not be feasible.
Please, lets not start a NN debate!

Besides the point here was to offer a program requirements that would
illustrate the productivity ratio between Lisp and C++ better than a simple
BITBLTer.

Actually I would love to see a example that is more complex than the
BITBLTer but far less harder than my torpedo "research project". One thing
that I came up with is something that required lots of interactive testing
and evolution of pieces of a relatively simple but unknown algorithm. The
C++ challenger then would then be faced with having to recompile and run
over and over again increasing the ratio. The problem here of course is
that if the problem is unsolved the solution time is unknown and if the
problem is solved the C++ challenger would use the solved algorithm.

Harley Davis

unread,
Dec 23, 1994, 7:33:51ā€ÆAM12/23/94
to

In article <3de3ok$7...@news.u.washington.edu> d...@u.washington.edu (Devin Cook) writes:

Also, CLOS is not rich enough (IMHO) to work well with objects such
as Windows GDI objects.

Could you be more precise about this claim?

-- Harley Davis

--

------------------------------------------------------------------------------
Harley Davis net: da...@ilog.fr
ILOG S.A. tel: +33 1 46 63 66 66
2 Avenue GalliƩni, BP 85 fax: +33 1 46 63 15 82
94253 Gentilly Cedex, France url: http://www.ilog.fr/

Ilog Talk information: in...@ilog.com

Harley Davis

unread,
Dec 23, 1994, 10:58:26ā€ÆAM12/23/94
to

I don't know many (read none) people who write Lisp to interface to
X-windows or make Sybase queries. These tasks are more what I'd
envision as being a part of real world problems, not writing
blackboard systems.

Practically all of our clients do one or another of these things, or
both. (On the other hand, I only know of one client working on a
blackboard-like system.) Furthermore, they program the GUI/DB access
part of the application using C++ libraries which are automatically
interfaced into our Lisp, Ilog Talk.

Of course, in between accessing a database and displaying data on the
screen, there is a great deal of work to do. Our philosophy is that
C++ is a good language for writing well-understood,
performance-intensive libraries such as graphics libraries or general
database access libraries, while Lisp is a good language for rapid
prototyping and development of high-level or poorly-understood
problems. The important thing is to make Lisp and C++ work together
seamlessly. This is what we've tried to accomplish with Ilog Talk.

I would also point out that SmallTalk, which is basically Lisp with
another syntax, has carved out a considerable niche in exactly the
database/GUI market.

Nick Mein

unread,
Dec 23, 1994, 9:18:10ā€ÆPM12/23/94
to
Jim McDonald (mcdo...@kestrel.edu) wrote:

: Common Lisp implementations tend to provide a lot of such error handling


: automatically--in fact much of that handling is specified by the ANSI
: standard, although offhand I'm not sure where the exact boundaries are.

I still feel that there is an important difference between default error
handling (no matter how desirable this may be during exploratory
programming) and application-defined error handling.


: |> Also (maybe significantly) there does not appear to be the equivalent


: |> of my "assert(data)" - an important check that some client of the class
: |> will not get away with doing something stupid - like manipulating an image
: |> that hasn't been loaded yet!

: In lisp, runtime type-checking catches many such errors, entering an
: interactive debugger from which repairs can be attempted.

: Using the quick hack I wrote:

: > (read-image ".cshrc")

You appear to have missed the distinction between stuff-ups by the
programmer, and run-time errors. As I suggested in an earlier post, try
writing:

(setf im (make-instance 'image))
(invert im)

Does this error get caught?

: |> Oh, I see. Checking the number of command line arguments, checking


: |> the return value of system calls, returning a meaningful value to
: |> the system is a lot of junk.

: Well, it is nicer to get all of that automatically without having
: to clutter your code with it:

: > (read-image "/tmp/hack" 47)
: >>Error: READ-IMAGE called with 2 arguments, but only 1 argument is allowed

Again, you seem to be confusing stuff-ups by the programmer (calling a
function with the wrong number of parameters) with run-time errors
(running the program with the wrong number of arguments, or invalid
arguments; io errors, etc).

: Well, 6 minutes (or whatever it took Erik) + .27 second + .56 seconds


: certainly seems like a win over 60 minutes + 0.0 seconds.

Are you being serious?

|> I was hoping that people would (if they took the challenge seriously at all)
|> look at the spirit of the challenge without me laying down the rules
|> explicitly. Marco Antoniotti did so, and so did you with your second & third
|> versions. Thanks.

: I tried, but to be honest, I really didn't know what you wanted--it wasn't
: an idle complaint.

Fair enough - I've re-read my original post, and it wasn't too specific.
The challenge was to build a simple "image" class, implementing three
operations/methods (load, invert, save). The class was to be engineered
to a degree that it would be suitable as a component in a larger system.

: It was a step towards the third version where the data has a variable


: format, given by the header. Again, I ask you how long it would take
: to revise your program to accept an array of arbitrary byte formats,
: e.g. 128-bit bytes, with any number of dimensions, e.g. 256 * 256 * 4 * 2,
: where the formats are given in the data file. Such a capability seems
: to be far more object-oriented than the minimal encapsulation you used.
: I think your little *p hack becomes problematic.

If you like, email the file-format specs to me & I'll have a go and let
you know (next year some time).

: If I really cared fantastically about speed I'd spend a few hours


: to add a compiler-macro to produce hand-coded optimal assembly
: code for the target machine whenever the compiler saw the pattern
: to be optimized. That tends to create programs a few per cent
: faster than C compilers can create.

Well, this is getting very close to verifying my feeling that claiming
that "programming in Lisp is 2x to 10x more productive than programming
in C++ (regardless of the problem domain)" is simply not true.

Merry Christmas,

Nick.

Nick Mein

unread,
Dec 23, 1994, 9:24:47ā€ÆPM12/23/94
to
Jim McDonald (mcdo...@kestrel.edu) wrote:

: Excuse me? The "first guy" (Nick) wrote a program that checked for about


: 1/100 of the errors that the lisp versions looked for

My error handling was minimal, but what possible error(s) would my
program not have caught?

David Hanley

unread,
Dec 24, 1994, 5:01:41ā€ÆAM12/24/94
to
Andrew Koenig (a...@research.att.com) wrote:
: In article <19941221.141340...@UICVM.UIC.EDU> dha...@matisse.eecs.uic.edu (David Hanley) writes:

: Not better, actually, because it relies on an int being 4 bytes
: of 8 bits each.

I realize that, but one has to make certian assumptions
sometimes to make code that will execute okay. But I didn't quite
have to;

If you're going to use this strategy,
: the most portable is probably something like

: int c;

: while ((c = getchar()) != EOF)
: putc(~c);

Nahhhh. Much better to do this:

int c;

while( fread( &c , sizeof( int ) , 1 , in ) == 1 )
{
c = ~c;
fwrite( &c , sizeof( int ) , 1 , out );
}

Better yet, make it work with larger chunks.

Erik Naggum

unread,
Dec 24, 1994, 4:39:12ā€ÆAM12/24/94
to
[Nick Mein]

| My error handling was minimal, but what possible error(s) would my
| program not have caught?

now, why did that question make me shudder?

#<Erik>
--
requiescat in pace: Erik Jarve (1944-1994)

Erik Naggum

unread,
Dec 24, 1994, 4:36:08ā€ÆAM12/24/94
to
[Nick Mein]

| I still feel that there is an important difference between default
| error handling (no matter how desirable this may be during exploratory
| programming) and application-defined error handling.

but, Nick, don't you _understand_? in LISP, you get _both_. the default
error handling is just that: default. in C++, what is the default? a core
dump, the big red switch? and is aborting a good response to an error?
no, of course it isn't always good. even MS-DOS allows you to Abort, Retry
or Ignore, right? you C++ program does error handling worse than MS-DOS.

| As I suggested in an earlier post, try writing:
|
| (setf im (make-instance 'image))
| (invert im)
|
| Does this error get caught?

what error? this is a semantic error, right? just because C++ would allow
you to use a NULL pointer or somesuch, doesn't mean that one would not use
default initialization arguments in LISP. I'd imagine that you'd get an
inverted default image from evaluating these two forms.

| Again, you seem to be confusing stuff-ups by the programmer (calling a
| function with the wrong number of parameters) with run-time errors
| (running the program with the wrong number of arguments, or invalid
| arguments; io errors, etc).

so what's the difference between running a program and calling a function,
again? obviously, forking off a process and loading a code image is not
such a big difference that error handling must be different. just because
C++ doesn't allow you to automatically assign arguments of the correct type
to named variables but has to give you an array of pointers to strings that
you have parse yourself (and incur parsing errors as well), doesn't really
make it an advantage for C++ over LISP, you know.

[Jim McDonald]

| If I really cared fantastically about speed I'd spend a few hours to
| add a compiler-macro to produce hand-coded optimal assembly code for
| the target machine whenever the compiler saw the pattern to be
| optimized. That tends to create programs a few per cent faster than C
| compilers can create.

[Nick Mein]

| Well, this is getting very close to verifying my feeling that claiming
| that "programming in Lisp is 2x to 10x more productive than programming
| in C++ (regardless of the problem domain)" is simply not true.

then you haven't been listening. you have to do this assembly-level
hacking all the time in C++, while in LISP, you'd do it only when
necessary. again, the C++ defaults are wrong, on almost all levels.

but, perhaps more to the point, how do you program your optimizer in C++?
or do you have to resort to assembly-language code right where you need it?

It is loading more messages.
0 new messages