Reference Counting (was Searching Method for Incremental Garbage Collection)


Tom O Breton

Nov 23, 1994, 1:26:06 AM
OK, I've been following this thread for a bit, and there are two
objections to reference counting that IMO aren't as strong as they
appear. I await other opinions of course.

0: It orphans circular structures.

Well, if you do it naively, sure. But the way I've always handled such
memory is that there's a top structure (head pointer, usually) that owns
everything inside. Everything outside just borrows memory. Anything that
includes circular pointers is complex enough that it belongs in such an
encapsulation. Obviously you would count references to the head, not to
every member.

1: It's slow, due to count update overhead.

Again, I get the impression that what one has in mind is a very naive
implementation. Say, one that "thrashes" every time there's a loop that
copies and destroys a pointer. I have even heard people claim that
finding the length of a list in Lisp has to inc/dec for _every step_!
The purpose of reference counting is to handle "sibling" deep pointers
where you don't know which one will get destroyed first. Reference
counting anything whose order of destruction you can predict at compile
time seems pointless.

Tom

--
t...@world.std.com
TomB...@delphi.com: Author of The Burning Tower

Henry G. Baker

Nov 23, 1994, 1:42:07 AM

Good ideas. Now can you formalize these rules & prove conditions
under which any & all garbage will be collected? I'm trying to
understand _provably safe styles_ of programming.

Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hbaker/home-page.html

Henry G. Baker

Nov 23, 1994, 8:23:02 PM
In article <Czqu5...@world.std.com> t...@world.std.com writes:

>hba...@netcom.com (Henry G. Baker) writes:
>> Good ideas. Now can you formalize these rules & prove conditions
>> under which any & all garbage will be collected? I'm trying to
>> understand _provably safe styles_ of programming.
>
>Wow, we must really be approaching it from different perspectives.
>
>You seem to want a more formal(?) academic(?) statement, which I'm at a
>loss to frame. I probably have some implicit assumption, or assumptions,
>that you don't. To me, as a programmer, hearing or saying "there's a top
>structure that owns everything inside." expresses the idea precisely
>enough. I'm not sure what it is you want me to make more precise.

I'm not trying to be dense. What does 'own' mean? What does 'inside'
mean? How is ownership transferred and/or controlled? What happens
in the presence of exceptions? These are the kinds of things that
need to be made precise in order to convert good ideas into a robust,
workable _style_ of programming.

Tom O Breton

Nov 24, 1994, 12:02:26 AM
hba...@netcom.com (Henry G. Baker) writes:
> I'm not trying to be dense.

Right, of course not. I ftp'ed a paper of yours yesterday, you're
obviously not dense.

> What does 'own' mean?

Like in real life, an exclusive association. If A owns something, B, C,
and D et al. do not own it.

Or to put it another way, when resources such as memory are "owned",
they are controlled by a single thing, and that thing is the only thing
that acquires them and releases them. Non-owners might use the resources
with permission, but only the owner can release them.

If the owner does not disrupt the balance (i.e., acquisitions exactly
match releases), and maintains everything inside in a predictable
acquisition-status (e.g., a doubly-linked list might start and end empty,
add every node it allocates, and remove every node it deletes), then we
know that everything inside is not garbage.

The "predictable acquisition-status" need not be the same for everything
inside, it just has to be predictable. For instance, we might have a
convention that some pointers are valid and some aren't, and (of course)
only the valid pointers are in the acquired state. We just have to be
sure that we can always tell them apart, for instance by setting
pointers to a NULL value when we invalidate them.

> What does 'inside' mean?

Depends on exactly what your data structure is. Let's say it's a
doubly-linked list. If you traverse the entire list, the nodes that you
encounter are inside; everything you don't encounter is outside.

But let me see if I can frame a more general answer: Inside == reachable
through a certain set of operations from a given starting point. E.g.,
for a doubly-linked list, the operations might be get_prev and get_next
and the starting point might be the head node. Obviously you want to
deal with sets of operations that allow insideness to be controlled from
a single point, or at least easily controlled.


> How is ownership transferred and/or controlled?

Transferred: The key is that a resource is owned by exactly one thing,
so if B wants to own something A owns, either A stops owning it (e.g.,
the node in question is removed from the list), or B acquires something
equivalent (e.g., B makes or receives another copy, which B then owns).


Controlled: If I understand your question, it depends on what tools your
language gives you. Some languages of course will let you stomp all over
everything and never tell you. I prefer strong typing for control, where
a deep pointer is a different type than a shallow pointer, and only
legal operations can happen among them. But that's just a preference.

> What happens in the presence of exceptions?

Depends how far you unwind. If you unwind far enough that the owner
disappears, then it obviously can no longer own resources. Since it
should have no resources after it "dies", it must either release them or
give them to something else.

If you unwind out of acquisition/releasing routines, obviously you want
to correctly communicate in what state they left the resource.


> These are the kinds of things that need to be made precise in order to
> convert good ideas into a robust, workable _style_ of programming.

Thanks, that was much easier to answer.

Tom O Breton

Nov 23, 1994, 6:20:09 PM
hba...@netcom.com (Henry G. Baker) writes:
> Good ideas. Now can you formalize these rules & prove conditions
> under which any & all garbage will be collected? I'm trying to
> understand _provably safe styles_ of programming.

Wow, we must really be approaching it from different perspectives.

You seem to want a more formal(?) academic(?) statement, which I'm at a
loss to frame. I probably have some implicit assumption, or assumptions,
that you don't. To me, as a programmer, hearing or saying "there's a top
structure that owns everything inside." expresses the idea precisely
enough. I'm not sure what it is you want me to make more precise.

Tom

Kevin Warne

Nov 25, 1994, 6:22:27 PM
In article <CzpJ7...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>objections to reference counting that IMO aren't as strong as they
>appear. I await other opinions of course.
>
>0: It orphans circular structures.
<stuff about good programming techniques snipped>

>1: It's slow, due to count update overhead.

<more stuff about programming techniques snipped>

Of course, if you pare down the references to the minimum needed
and inc/dec as necessary, you'll have a fairly efficient system.
Unfortunately it won't be automatic.

I think it's important to maintain a distinction between reference
counting as accounting for explicit memory management and reference
counting collectors as a form of automatic memory management.

As a form of accounting, reference counts (or mark bits, or whatever)
are needed to track liveness. To properly deallocate memory, you
need this accounting information, the actual deletion code, and a
well-formed policy for your application about just who is responsible
for reclaiming memory. With allocated memory comes the responsibility
to deallocate that memory. That responsibility is handed off from
one part of the code to another (either explicitly or inside the
programmer's mind) until that object is deleted.

I believe this is what you were referring to when you wrote about
top structures and head pointers.

With an automatic memory manager, this responsibility, along with the
actual deletion code, stays inside the memory manager. The programmer
doesn't need to worry about transferring the responsibility to delete
objects because the memory manager will take care of it.

So my take on the subject is as follows. If you're keeping track of
objects for explicit storage management (where you pass responsibility
for deleting objects back and forth) then reference-counting is yet
another tool for keeping track of accounting information. If you're
relying on your memory manager to keep track and deallocate dead
objects for you, then reference-counting isn't so hot...

Kevin

____
Kevin Warne / kev...@whistler.com / +1 (604) 932-0606
Suite 204, 1200 Alpha Lake Road, Whistler BC, CANADA, V0N 1B0
Warne's Garbage Collector is the best way to allocate memory in C++ under NT

Henry G. Baker

Nov 25, 1994, 8:10:07 PM
In article <3b5rjj$k...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:
>In article <CzpJ7...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>>objections to reference counting that IMO aren't as strong as they
>>appear. I await other opinions of course.
>>
>>0: It orphans circular structures.
><stuff about good programming techniques snipped>

There are some cases where one or more elements of the circular
structure can't be reclaimed for other reasons. A good example is the
case of 'properties' on 'property lists' of traditional Lisp (but not
usually Scheme) systems. Since someone can create a reference to a
symbol out of thin air (i.e., by calling '(read)', or by interning a
character string at run-time), symbols can _never_ be reclaimed by the
standard garbage collector if they have ever been given a print name.
(I have argued elsewhere that 'gensym' should _not_ produce symbols
with print-names, and the print-names should be given out the first
time one tries to print them.)

Therefore, the programmer will have to go to some explicit effort --
even in Lisp -- to reclaim certain structures, whether or not they
have cycles.

>>1: It's slow, due to count update overhead.
><more stuff about programming techniques snipped>
>
>Of course, if you pare down the references to the minimum needed
>and inc/dec as necessary, you'll have a fairly efficient system.
>Unfortunately it won't be automatic.

I'm not sure what you mean by 'automatic' in this context. If you
mean 'oblivious to the programmer's coding style', then I suppose that
you are right. But what system programmer is ready to give up his
right to explicitly manage certain resources in certain situations?

What _would_ be nice, would be some way to know at _compile time_ if
certain constraints were violated, so that one would immediately know
that his program would eventually fail at run-time due to premature
evacuation (GC), or a storage leak (object never gets reclaimed).

The 'linear' style of programming achieves this goal, and I expect
that there may be other styles of programming which have interesting
compile-time and/or run-time properties. I guess I'm not ready to
concede that all of the important memory management ideas were found
by 1960 (at least reference counting and GC; see CACM 1960 for both
Collins's and McCarthy's papers).

>As a form of acounting, reference counts (or mark bits, or whatever)
>are needed to track liveness. To properly deallocate memory, you
>need this accounting information, the actual deletion code, and a
>well-formed policy for your application about just who is responsible
>for reclaiming memory. With allocated memory comes the responsibility
>to deallocate that memory. That responsibility is handed off from
>one part of the code to another (either explicitly or inside the
>programmer's mind) until that object is deleted.

What seems to be needed is some way to indicate 'ownership
responsibilities' as part of the _type_ of an object, so that such
things can be useful at compile time. Obviously, if the rules are
simple enough for a programmer to learn and memorize, then there is
some hope that one could build them into compile-time tools to do the
same things.

>I believe this is what you were referring to when you wrote about
>top structures and head pointers..
>
>With an automatic memory manager, this responsibility along with the
>actual deletion code stays inside the memory manager. The programmer
>doesn't need to worry about transferring the responsibility to delete
>objects because the memory manager will take of it.

I'm not sure what fraction of all programmers is able (or ready) to
give up complete control of resource management to a 'black box'
system. One need only look to human economic systems to see that
'private ownership' of resources can often be considerably more
efficient in the utilization of the available resources than can
'system' (i.e., governmental) management.

Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hb/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hb/hbaker/home.html
************* Note change of address ^^^ ^^^^

Tom O Breton

Nov 25, 1994, 11:22:16 PM
kev...@whistler.com (Kevin Warne) writes:
> Unfortunately it won't be automatic.
>
> I think it's important to maintain a distinction between reference
> counting as accounting for explicit memory management and reference
> counting collectors as a form of automatic memory management.

OK. Actually I agree. Doing it without a lot of programmer effort, and
without introducing error opportunities is important. But so is opening
opportunities for efficiency.

Can what I describe be automated, or largely automated? I think at best
it could be largely automated, requiring a keyword or so and giving
compiler errors under conflicting conditions rather than runtime
failures. (Obviously only applicable to compiled languages)

Some optimizations can be automatic, like hoisting reference count
updates out of loops, or to the endpoints of an entire routine.

Others, like constraining a structure to only take references it can
own, would require communication from the programmer, though I'd love to
be shown wrong.

Perhaps the ideal would be to develop something with GC and then turn it
off when (if) you got around to fine-tuning it.

Kevin Warne

Nov 26, 1994, 11:57:07 AM
In article <CzuxH...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>Others, like constraining a structure to only take references it can
>own, would require communication from the programmer, though I'd love to
>be shown wrong.

I think that future programming languages will have constructs for compile-time
behavior as well as run-time behaviour. The compile time behavior will be a
superset of what is handled now with makefiles, macros, preprocessors, etc.
Through code that executes at compile-time, developers will be able to specify
exactly how their classes and objects can be used. Given such objects and
metaobjects, I think such behaviour is possible... Right now, probably not...


>Perhaps the ideal would be to develop something with GC and then turn it
>off when (if) you got around to fine-tuning it.

Waitasec... The implication here is that GC is and will always be slower than
hand-coding all the deletes. That's just not true.

1. A collector deallocates a large number of objects all at once. Hand-coded
deletes happen 1 at a time. This makes a difference when writing the
memory manager. The deallocation costs for a good collector are minimal.
(e.g., copying + non-copying collectors, implicit reclamation, etc.)
The deallocation costs for a good explicit memory manager are about the
same (or slightly higher) as the allocation costs.

2. Deallocation is part of the critical path when using explicit memory
management. A spreadsheet recalc that allocates a lot of temporary
memory is going to have to deallocate that memory. And the user is going
to have to wait through that deallocation. A collector can defer collection
to application idle time, further decreasing the cost of deallocation.

4. For short-lived processes, a collection might not be required at all.
This dynamic optimization isn't available to applications that hard-code their
deletes.

5. Collectors can be made parallel. Parallel collectors can run on separate
processors, with the appropriate speed benefits for otherwise single-threaded
applications.

6. Collectors don't need to be black boxes for speed-hungry developers. There
are many collectors with very flexible interfaces. Developers can use their
knowledge of a collector's operation to further tune their application. And
a good collector should be flexible enough to coexist with an explicit memory
manager.

7. Collectors just get faster as users add more memory.

Now obviously the boundary between what is and is not GC is very fuzzy. As is the
boundary between what is and is not automatic.

But regardless of where you draw the line, I suggest that most applications
today would benefit from the use of GC.

Kevin Warne

Nov 26, 1994, 12:42:32 PM
In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:
>(I have argued elsewhere that 'gensym' should _not_ produce symbols
>with print-names, and the print-names should be given out the first
>time one tries to print them.)
>
>Therefore, the programmer will have to go to some explicit effort --
>even in Lisp -- to reclaim certain structures, whether or not they
>have cycles.

Hmm, sounds like the granularity of reference/liveness is too coarse
in this case for proper collection (although it might be just right
for other reasons).


>I'm not sure what you mean by 'automatic' in this context. If you
>mean 'oblivious to the programmer's coding style', then I suppose that
>you are right. But what system programmer is ready to give up his
>right to explicitly manage certain resources in certain situations?

Indeed. Using a mixture of the two is probably the best policy right
now.


>The 'linear' style of programming achieves this goal, and I expect
>that there may be other styles of programming which have interesting

Are there any usable languages yet for actually programming in this
style? (I know... I should check out your FTP cache first before
asking...)


>compile-time and/or run-time properties. I guess I'm not ready to
>concede that all of the important memory management ideas were found
>by 1960 (at least reference counting and GC; see CACM 1960 for both
>Collin's and McCarthy's papers).

On the contrary. I think that memory management is just now beginning
to get interesting.


>I'm not sure what fraction of all programmers is able (or ready) to
>give up complete control of resource management to a 'black box'
>system. One need only look to human economic systems to see that
>'private ownership' of resources can often be considerably more
>efficient in the utilization of the available resources than can
>'system' (i.e., governmental) management.

That comparison is reversed (and probably not all that accurate anyway,
as objects are a lot less complex than people).

I assume you're making a comparison between global optimization and
control as opposed to multiple distributed entities each optimizing
locally.

Using a collector, each method and object can locally allocate and use
memory without worrying about how to deallocate that memory. Memory
usage can become dynamic. For instance, a method may allocate a new
object and return that object as a matter of course. If the caller
doesn't use the returned object then it will be reclaimed automatically
at little cost.

Contrast this to existing C++ practice... The programmer (the gov't in
this example) has to set out a global allocation policy and manually
trace the lifespan of each object... If a class is reused in a
second application then this lifespan analysis has to be redone
and allocations retraced.

John Max Skaller

Nov 26, 1994, 10:52:49 AM
>hba...@netcom.com (Henry G. Baker) writes:
>> Good ideas. Now can you formalize these rules & prove conditions
>> under which any & all garbage will be collected? I'm trying to
>> understand _provably safe styles_ of programming.
>
>Wow, we must really be approaching it from different perspectives.
>
>You seem to want a more formal(?) academic(?) statement, which I'm at a
>loss to frame. I probably have some implicit assumption, or assumptions,
>that you don't. To me, as a programmer, hearing or saying "there's a top
>structure that owns everything inside." expresses the idea precisely
>enough. I'm not sure what it is you want me to make more precise.

Yep. Bertrand Russell had one of them too. Only he spent
ages trying to make it precise -- and still only found out just
in time to slip a page into his book saying "Woops -- the whole
thing is flawed!"

You may think that a set is a collection of things,
but Russell showed vague, intuitive (naive) notions are misleading.

See Ellis and Detlefs paper on GC in C++. They try to deduce what
subset of C++ will work with a conservative garbage collector.

To compare a naive and formal statement, see the ARM
(naive) and the C++ Working Paper (attempts to be precise).
Examining those naive and hidden assumptions is important.

Helps in design too. Whenever I "feel" a piece of code
ought to work but I can't explain exactly why -- guess where
the bug pops up?

--
JOHN (MAX) SKALLER, INTERNET:max...@suphys.physics.su.oz.au
Maxtal Pty Ltd,
81A Glebe Point Rd, GLEBE Mem: SA IT/9/22,SC22/WG21
NSW 2037, AUSTRALIA Phone: 61-2-566-2189

Henry G. Baker

Nov 26, 1994, 2:23:49 PM
In article <3b7pd3$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:
>In article <CzuxH...@world.std.com>, t...@world.std.com (Tom O Breton) says:
>>Others, like constraining a structure to only take references it can
>>own, would require communication from the programmer, though I'd love to
>>be shown wrong.
>
>I think that future programming languages will have constructs for compile-time
^^^^^^^^^^^^^^^^^^^^^^^^^^^

>behavior as well as run-time behaviour. The compile time behavior will be a
>superset of what is handled now with makefiles, macros, preprocessors, etc.
>Through code that executes at compile-time, developers will be able to specify
>exactly how their classes and objects can be used. Given such objects and
>metaobjects, I think such behaviour is possible... Right now, probably not...

Traditional Lisp macros, compiler-macros and advisors can already
fulfill some of these roles. It would be nice if we could better
formalize some of these tasks.

>Waitasec... The implication here is that GC is and will always be slower than
>hand-coding all the deletes. That's just not true.
>
>1. A collector deallocates a large number of objects all at once. Hand-coded

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This depends upon the collector, but is generally true.


> deletes happen 1 at a time. This makes a difference when writing the
> memory manager. The deallocation costs for a good collector are minimal.
> (e.g., copying + non-copying collectors, implicit reclamation, etc.)
> The deallocation costs for a good explicit memory manager are about the
> same (or slightly higher) as the allocation costs.

'Finalization' is becoming a more and more important issue, so the
interaction of GC and finalization needs to be clarified. With explicit
deallocation, the timing of finalization is usually better defined.

>2. Deallocation is part of the critical path when using explicit memory
> management. A spreadsheet recalc that allocates a lot of temporary
> memory is going to have to deallocate that memory. And the user is going
> to have to wait through that deallocation. A collector can defer collection
> to application idle time, further decreasing the cost of deallocation.

There's no reason why 'lazy' and/or parallel deallocation can't be done
with explicit deallocation.

>4. For short-lived processes, a collection might not be required at all.
> This dynamic optimization isn't available to applications that hard-code their
> deletes.

You have just made a pretty good argument for more tailored memory
management strategies.

>5. Collectors can be made parallel. Parallel collectors can run on separate
> processors, with the appropriate speed benefits for otherwise singlethreaded
> applications.

Other kinds of memory management schemes can also be made parallel.
'Decomposers' can easily be run as a separate thread -- especially
in the context of 'lazy' deallocation.

>6. Collectors don't need to be black boxes for speed-hungry developers. There
> are many collectors with very flexible interfaces. Developers can use their
> knowledge of a collector's operation to further tune their application. And
> a good collector should be flexible enough to coexist with an explicit memory
> manager.

Good idea. Now how to develop good, high-level protocols for communication
with an automatic resource manager.

>But regardless of where you draw the line, I suggest that most applications today
>would benefit from the use of GC.

Although much work has been done on GC for 'distributed' applications,
I don't think that the benefits of GC for these applications have been
so clearly demonstrated yet.

John Ellis

Nov 26, 1994, 5:51:07 PM
John Max Skaller writes:

See Ellis and Detlefs paper on GC in C++. They try to deduce what
subset of C++ will work with a conservative garbage collector.

Just a minor correction: We designed a safe subset that would work
with a wide range of garbage collection algorithms, not just
conservative algorithms.

Erik Naggum

Nov 26, 1994, 6:20:18 PM
[John Ellis]

pardon me for asking the seemingly obvious, but where would I find this
paper?

#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.

Kevin Warne

Nov 26, 1994, 6:35:38 PM
In article <hbakerCz...@netcom.com>, hba...@netcom.com (Henry G. Baker) says:
>There's no reason why 'lazy' and/or parallel deallocation can't be done
>with explicit deallocation.

Unfortunately the code required to signal 'delete me please' takes
about as long to execute as does the average case deallocation.
(Assuming the presence of a slick memory manager that will delete objects
in ~ 15 cycles) With a collector this overhead doesn't exist.


>You have just made a pretty good argument for more tailored memory
>management strategies.

There are many different techniques for creating and destroying objects.
Using a GC adds a few techniques. By having access to and using both
automatic and explicit memory managers, developers should be able to
create better applications as a result.


>Although much work has been done on GC for 'distributed' applications,
>I don't think that the benefits of GC for these applications have been

I think there's a lot more work to do to create decent environments
for creating distributed applications. Making a distributed GC work
well would be one step. It's so easy in a distributed environment to
orphan processes and objects that it'd be nice to have those partitioned
objects take care of themselves.

Scott McLoughlin

Nov 26, 1994, 3:59:26 PM
kev...@whistler.com (Kevin Warne) writes:

> Contrast this to existing C++ practice... The programmer (the gov't in
> this example) has to set out a global allocation policy and manually
> trace the lifespan of each object... If a class is reused in a
> second application then this lifespan analysis has to be redone
> and allocations retraced.
>

Howdy,
Well put. I think it might be important here on comp.lang.c++
to emphasize that GC is not just (1) lazy programming or even
(2) useful for implementing certain algorithms. (Both might be true
in specific cases.)
GC, i.e. "automatic storage management", is a key player
in how easily system modularization and code reuse are achieved. In
my experience at various run-o-the-mill coding shops, modularization
and reuse have often broken down over issues of resource management.
For these non-rocket scientists, GC would have been a godsend and
would have helped them leap the barrier between OO hype and practice.
OTOH, I do not think C/C++ should be retrofitted with
GC -- it breaks the spirit of the languages. Which leaves a burning
question.....

=============================================
Scott McLoughlin
Conscious Computing
=============================================


Rick Busdiecker

Nov 27, 1994, 9:20:52 PM (to Kevin Warne)
In article <3b7pd3$9...@scipio.cyberstore.ca> kev...@whistler.com (Kevin Warne) writes:

I think that future programming languages will have constructs for
compile-time behavior as well as run-time behaviour. The compile
time behavior will be a superset of what is handled now with
makefiles, macros, preprocessors, etc. Through code that executes
at compile-time, developers will be able to specify exactly how
their classes and objects can be used. Given such objects and
metaobjects, I think such behaviour is possible... Right now,
probably not...

What you are describing is not `future programming languages'. Such
languages exist. One, called Common Lisp, was actually getting a fair
amount of use a decade ago. Unfortunately, most people gave up on it
right about the time that significant improvements in compiler and
garbage collection technology were becoming standard for conventional
architecture Common Lisp implementations.

Code is data. Data is code. Lisp leverages this fact while glorified
assembly languages hide it. You should not need separate languages
(such as preprocessor-speak) to define new code transformations.

--
Rick Busdiecker <r...@lehman.com> Please do not send electronic junk mail!
Lehman Brothers Inc.
3 World Financial Center "The more laws and order are made prominent, the
New York, NY 10285-1100 more thieves and robbers there will be." --Lao Tzu

Andrew Main

Nov 29, 1994, 3:17:50 PM
In article <CzxA...@rheged.dircon.co.uk>,
si...@rheged.dircon.co.uk (Simon Brooke) writes:
>Look, we live in a world in which,
>
>(i) according to all the trade comics, there is a huge backlog of
>code waiting to be written (although this may be because you get more
>kudos for specifying than for coding...);

That's my experience too. It's sad.

>(ii) the average C (or similar language) programmer spends more time
>debugging than coding (*);

Often true, but not by as large an extent as you might think.

>(iii) the overwhelming majority of bugs in C (or similar language)
>programs are problems with memory allocation (walking off the end of
>the world; early deallocation; leaks) (*);

Not if the programmer is careful. I hardly ever have this type of bug.

{...}
>((*) Asterisks indicate bald assertions: I can't cite anything to back
>these up, but it's my experience).
{...}
> But in
>these days of megabytes of core, hand crafting (for example) memory
>management is at best an almost masturbatory vanity, and at worst a
>gross waste of resources (since the time you waste getting the bloody
>thing to work is far more valuable than the totality of the processor
>time it will ever save).

Hand-crafted memory management is *essential* in some cases. There is
no automatic allocation/deallocation system that allows programs to do
everything they can do in C (and certain other languages). In a
functional language, this isn't a problem, but functional languages
really aren't appropriate for most applications.

>Using automatic store management of whatever kind removes the single
>major cause of error in code, and allows the programmer to concentrate
>on the logic of the program. It thus greatly speeds application
>development, and enhances application reliability.

As I've indicated above, memory management is only a major cause of errors
if the programmer is sloppy. For good programmers, it's just another
arena for bugs to display themselves in.

>There is a place, of course, for low level languages like C:

C is not a low-level language. What I think you mean is that it allows
low-level access to many OS functions.

> These
>things will be used by millions of users and will be required to run
>for long periods perfectly reliably. Such programs are rare,
>thankfully, because producing code of this quality is hard and far
>fewer people can do it than think they can.

Ideally, EVERY program would operate reliably for long periods of time.
Every programmer should aim for that, and not just when writing kernels.

>Building code is an engineering problem. Suppose you are building a
>vehicle for a speed record attempt, you might very well argue 'right,
>we'll weld this assembly by hand, because the very best craftsmen can
>work to closer tolerances than a robot can, and this assembly is
>critical'. However, if you were production engineering the next Ford
>Mondeo, and you suggested hand welding the body shell, you'd be out of
>a job.

A valid argument, but again, you miss the point of explicit memory
management. There are some things that implicit memory management
*can't* do. Sure, you might not want to do such things, and the way
that functional languages are used you don't want to. But there is a
need for this ability.

-andrew

Peter Kron

unread,
Nov 30, 1994, 10:39:44 PM11/30/94
to
From: si...@rheged.dircon.co.uk (Simon Brooke)

> Look, we live in a world in which,
>
> (i) according to all the trade comics, there is a huge
> backlog of code waiting to be written (although this may
> be because you get more kudos for specifying than for
> coding...);
>
> (ii) the average C (or similar language) programmer spends
> more time debugging than coding (*);
>
> (iii) the overwhelming majority of bugs in C (or similar
> language) programs are problems with memory allocation
> (walking off the end of the world; early deallocation;
> leaks) (*);
>
> (iv) silicon is cheap and rapidly getting cheaper;
>
> (v) programmers are expensive and steadily getting more
> so.

>
> ((*) Asterisks indicate bald assertions: I can't cite
> anything to back these up, but it's my experience).

Very well put. Add to (iii)...more so if weighted by cost.
---
NeXTMail:peter...@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022

Marco Antoniotti

unread,
Dec 1, 1994, 9:58:49 AM12/1/94
to
In article <3bg29e$g...@holly.csv.warwick.ac.uk> cs...@csv.warwick.ac.uk (Andrew Main) writes:


From: cs...@csv.warwick.ac.uk (Andrew Main)
Newsgroups: alt.lang.design,comp.lang.c,comp.lang.c++,comp.lang.lisp
Date: 29 Nov 1994 20:17:50 -0000
Organization: University of Warwick, Coventry, UK

...

Hand-crafted memory management is *essential* in some cases. There is
no automatic allocation/deallocation system that allows programs to do
everything they can do in C (and certain other languages). In a
functional language, this isn't a problem, but functional languages
really aren't appropriate for most applications.

^^^^

C/C++ aren't appropriate for most applications either :)

...

C is not a low-level language. What I think you mean is that it allows
low-level access to many OS functions.

...

As do most serious Common Lisp implementations.

...

A valid argument, but again, you miss the point of explicit memory
management. There are some things that implicit memory management
*can't* do. Sure, you might not want to do such things, and the way
that functional languages are used you don't want to. But there is a
need for this ability.

That is correct. I am just now writing a CL library that does need
explicit memory management for a lot of data structures I build "on
the fly", but that cannot be collected since they get memoized in a
hash table. I can build it easily, relying on the garbage collector on
an "as needed" basis.

Happy Lisping

Stephen Benson

unread,
Dec 3, 1994, 6:09:19 AM12/3/94
to

In article <1994Dec01....@corona.com>, Peter Kron (pk...@corona.com) writes:
>From: si...@rheged.dircon.co.uk (Simon Brooke)
>> Look, we live in a world in which,
.
.
.

>> (v) programmers are expensive and steadily getting more
>> so.

Not if you buy them in India or parts of eastern Europe. Or many other non-US,
non-Euro countries. (Read _The Decline of the American Programmer_ or whatever
it's called)

(They're paid pretty damn badly in the UK too -- but then just about everyone
is)

--
: stephen benson : : : : : : : : step...@scribendum.win-uk.net

Erik Naggum

unread,
Dec 3, 1994, 5:14:02 PM12/3/94
to
[Simon Brooke]

| Look, we live in a world in which,

:


| (v) programmers are expensive and steadily getting more
| so.

[Stephen Benson]

| Not if you buy them in India or parts of eastern Europe. Or many other
| non-US, non-Euro countries. (Read _The Decline of the American
| Programmer_ or whatever it's called)

Edward Yourdon: Decline and Fall of the American Programmer, ISBN 0-13-203670-3

last I heard, as in: when I got loads of mail from several Indian companies
thinking I'd like to use their cheap Windoze programming services just because
my company has "software" in its name, they are inexpensive all right, but
also getting more expensive much faster than their American counterparts.
must be the rise in demand.

Yourdon basically claims that since they got into the programming business
much later than American programmers, they use CASE tools, which is
Yourdon's favorite solution to the software crisis. there is probably some
truth to his claims, and from what I have seen of the C++ world, it is nigh
impossible to write C++ code without a machine doing the work for you.

I have yet to see comparisons of productivity between a CASE/C++ programmer
vs a LISP programmer, but they claim five times increase in productivity,
and that's mostly from the dropping number of bugs, for the CASE/C++ combo,
and ten times increase in productivity with LISP. such figures are
probably way too round to be accurate, but if there's truth to them, you'd
still be way ahead with a real programming language compared to a crippled
language, however good the crutches.

#<Erik>
--
The check is in the mail. This won't hurt. You'll find it on the Web.

Andrew Koenig

unread,
Dec 4, 1994, 11:06:50 AM12/4/94
to
In article <19941203T2...@naggum.no> Erik Naggum <er...@naggum.no> writes:

> Yourdon basically claims that since they got into the programming business
> much later than American programmers, they use CASE tools, which is
> Yourdon's favorite solution to the software crisis. there is probably some
> truth to his claims, and from what I have seen of the C++ world, it is nigh
> impossible to write C++ code without a machine doing the work for you.

Really? That's news to me.

> I have yet to see comparisons of productivity between a CASE/C++ programmer
> vs a LISP programmer, but they claim five times increase in productivity,
> and that's mostly from the dropping number of bugs, for the CASE/C++ combo,
> and ten times increase in productivity with LISP. such figures are
> probably way too round to be accurate, but if there's truth to them, you'd
> still be way ahead with a real programming language compared to a crippled
> language, however good the crutches.

The closest thing I've seen to a believable study was by Howard Trickey,
who implemented a fairly substantial program (O(10K lines)) in Lisp and
C++ simultaneously and took notes. His conclusion was that in that
particular context, choice of language didn't affect his productivity much.

Yes, it's only one data point and yes one can find all kinds of flaws
in the methodology, but I haven't seen any evidence that isn't at least as flawed.
--
--Andrew Koenig
a...@research.att.com

Giuliano Procida

unread,
Dec 5, 1994, 8:55:21 AM12/5/94
to
If you are into this sort of thing, you may be interested in:

"Haskell vs. Ada vs. C++ vs. Awk vs. ... An Experiment in Software
Prototyping Productivity" by Paul Hudak and Mark P. Jones,

Available via anonymous ftp as:
ftp://nebula.cs.yale.edu/pub/yale-fp/papers/NSWC/jfp.ps

... in which there are also references to the original write-up of the
prototyping experiment.

Giuliano.

Lawrence G. Mayka

unread,
Dec 5, 1994, 8:55:53 AM12/5/94
to

I am obliged to point out that, given AT&T's very close past and
present relationship with C++ and strong private and public commitment
to it, any refutation of Mr. Trickey's ancient memorandum and
Mr. Koenig's comments above, including any contrary evidence, would
very likely be stamped AT&T Proprietary--Not for Public Disclosure.
AT&T employees cannot be considered unbiased, unconstrained
participants in discussions of this kind. I can only suggest that
readers learn CLOS and C++ for themselves, try both languages on
appropriate and significant applications, making use of development
environments appropriate to each, and then come to their own
conclusions.

If this thread is to continue at all, I suggest that it not be
cross-posted to widely disparate newsgroups.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Andrew Koenig

unread,
Dec 5, 1994, 12:27:09 PM12/5/94
to
In article <LGM.94De...@polaris.ih.att.com> l...@polaris.ih.att.com (Lawrence G. Mayka) writes:

> I am obliged to point out that, given AT&T's very close past and
> present relationship with C++ and strong private and public commitment
> to it, any refutation of Mr. Trickey's ancient memorandum and
> Mr. Koenig's comments above, including any contrary evidence, would
> very likely be stamped AT&T Proprietary--Not for Public Disclosure.
> AT&T employees cannot be considered unbiased, unconstrained
> participants in discussions of this kind.

No one is unbiased or unconstrained.

For the record, I will say that if a refutation of Trickey's paper
exists, I have not seen it. Moreover, I have no reason to believe
one exists, if only because it is hard to refute experience.

I will also say that I work on C++ because I believe it is a
worthwhile endeavor, not because anyone is forcing me to do so.
And it is *far* from the only programming language I speak fluently.
--
--Andrew Koenig
a...@research.att.com

Richard Billington

unread,
Dec 15, 1994, 10:39:04 AM12/15/94
to
Dick Gabriel said that Lucid's experience with developing their
"C" tools (Energize) indicated at least a 3-fold
difference between using C and Lisp as development languages - the
increase being in Lisp's favour (i.e., productivity was improved 3 to 1
if one used Lisp). I agree with LGM, this proves nothing, but is simply
some heresay which is contrary to Mr. Trickey's heresay.
--
Richard Billington
bu...@cc.gatech.edu

Erik Naggum

unread,
Dec 15, 1994, 3:49:09 PM12/15/94
to
[Richard Billington]

| I agree with LGM, this proves nothing, but is simply some heresay which
| is contrary to Mr. Trickey's heresay.

I have read Howard Trickey's report in SIGPLAN 23(2). "heresay" is a good
term to cover the article. there is not enough information in this paper
to conclude anything, not even restricted to the specific case under study.
the lack of rigor in the presentation of both arguments and conclusion, the
lack of _relevant_ detail in the description of each of the versions of the
program, and the unwarranted generalizations individually and together make
this paper utterly worthless. it is also very, very old, and while its
relevance may have been non-zero at the time it was written, today it is
not worth the paper it was printed on.

according to the author's own data, he has 42KLOC experience in procedural
languages (Pascal, C, PL/I, and PDP/11 assembly), and 3KLOC experience in
Franz Lisp. "Of course one reason why the two versions are about the same
size is that my desire to have identically behaving programs gave a strong
incentive to look at one version when coding the other. Perhaps a True
Lisper has a completely different, more compact coding style. ... I don't
think there are any places where one language would have dictated a vastly
different design than the other." the author is terminally confused on the
difference between language and implementation. "However, for finding
syntax errors, C++ has an [sic] big advantage. If a 500 line C++ file has
errors, you will find out about all of them (up to a limit) in about 20
seconds. The Lisp compilation aborts after he first error, so it can take
many minutes to find them all." this author is also hung up in the time
spent in garbage collection, without giving any kind of statistics for the
largely hidden memory allocation costs of C++. "A rather bigger graph took
109 seconds in C++ and 805 seconds in Lisp, because Lisp needed 16 garbage
collections to finish." the single most damaging aspect of garbage
collection might indeed be that systems report the time spent in garbage
collection separately. a telling comparison is the following: "If you bind
new Lisp locals where logical, a function that is easily comprehensible in
C++ looks too long in Lisp." a fundamental difference between the two
languages is precisely how and when you create functions. the author
dismisses macros, or what I understand to be macros, anyway: "Perhaps a
Lisp advocate would counter that it isn't difficult to customize the Lisp
syntax to whatever one desires. My experience is that this isn't a good
idea, because tools like the debugger operate on a more canonical
representation so it is best to be familiar with your program in "standard"
Common Lisp. It is also to the benefit of other readers not to use a
strange syntax." it may seem that the author has failed to do the simplest
thing in comparing languages and compilers: turn on optimization. "The
compiler flags for both versions were as I set them for normal development:
-g for C++ and the defaults for Lisp. The C++ version could be sped up
using optimization (-O), and the Lisp version could be sped up using
`fast-entry' (avoids checks for the correct number of arguments)." the
author uses only a small fraction of the Lisp primitives: "It took me about
two weeks to design and implement C++ things that were primitives in Lisp;
after that, the added repertoire of Lisp didn't make much of a difference."

I conclude that we have a C/C++ pro and a Lisp amateur comparing apples and
oranges, and coming out with what he knows best. zero information value.
worse yet, those who refer to this paper must do so for political reasons,
or marketing, which amounts to the same thing, and they can hardly be
intellectually honest about it.

Cyber Surfer

unread,
Dec 16, 1994, 2:31:24 PM12/16/94
to
In article <19941215T2...@naggum.no> er...@naggum.no "Erik Naggum" writes:

> dismisses macros, or what I understand to be macros, anyway: "Perhaps a
> Lispa advocate would counter that it isn't difficult to customize the Lisp
> syntax to whatever one desires. My experience is that this isn't a good
> idea, because tools like the debugger operate on a more canonical
> representation so it is best to be familiar with your program in "standard"
> Common Lisp. It is also to the benefit of other readers not to use a
> strange syntax." it may seem that the author has failed to do the simplest

There's a simple solution: document the macros and treat them as
additions to the language. Even C programmers do that! How else
can you use tools like lex, yacc, or embedded SQL? Someone had to
document these tools.

Forth, Lisp, and even Smalltalk programmers can look at the language
system they use for programming in a slightly different way, by seeing
that the system itself is a program written in the language it compiles.
(See PJ Brown's book Writing Interactive Language Compilers and
Interpreters for a definition of "compiler" or "interpreter".)

I often suspect that mistakes like these stem from unfamiliarity
with compiler theory.
--
CommUnity: ftp://ftp.demon.co.uk/pub/archives/community
Me: http://cyber.sfgate.com/examiner/people/surfer.html

William Paul Vrotney

unread,
Dec 19, 1994, 12:39:31 AM12/19/94
to
In article <D0xAI...@rheged.dircon.co.uk> si...@rheged.dircon.co.uk (Simon Brooke) writes:

> In article <BUFF.94De...@pravda.world>,

> Well, cf Erik Naggum's response to the above, Dick Gabriel in
> particular and the Lucid team in general must be taken as LisP
> experts, so that may to some extent weight against their experience as
> reported above. Nevertheless, if this statement is something Dick has
> published and provided some quantifiable support for, it would be
> extremely useful material. There is so *damn* *little* serious study
> of the comparative productivity of differing programming tools.

It seems like computer science is stuck with surveys here. Is there any
more scientific approach?

>
> No, I'm not expecting any study which "conclusively proves" that any
> particular language is "the best" for any but very special purposes
> (and indeed I'd take with a large pinch of salt any study which
> claimed to); but it's useful to hear from people who have a wide range
> of expertise in more than one language.
>
> My own view, for what it's worth, is that LisP is probably more like
> eight or ten times as productive as C, in terms of delivered user
> level functionality per programmer hour (I don't know enough C++ to
> comment). However I'm biased towards LisP; although I used BCPL before
> LisP, LisP is far and away my preferred language. My experience runs
> to about 80KLOC in C, probably five times that in various LisPs (but
> mainly Interlisp/LOOPS).
>

This is close to my experience. I have mentioned that there feels like a 5
to 1 productivity ratio in Lisp to C++ to my colleagues who are more or less
experts in both Lisp and C++ and the usual response is that it is too
conservative an estimate. One part of the big ratio is attributed to the
lack of a good C++ library. I created a Lispy C++ library for myself which
includes lists, dynamic typing and other nice Lispisms, and this helped
considerably but the productivity ratio is still pretty high. I think an
even bigger part of the ratio is due to the lack of the ability of C++ to do
what is referred to as exploratory programming. If exploratory programming
is important to developing the application then this makes the ratio really
high. For example, in developing my Go program in Lisp the ability to do
exploratory programming feels like it shoots up the ratio to more like 100
to 1, seriously!

This leads me to believe that for a complex enough application, like a Go
program, it is better to develop it in Lisp then recode in C++ in the last
two weeks before delivery. Or better yet compile efficiently (enough) in
Lisp when that becomes more feasible.

It is interesting to muse with the powerful software automation of the
"software ICs" paradigm, such as NEXTSTEP, in the future perhaps the only
interesting programming will be consuming software ICs, producing software
ICs and otherwise exploratory programming. Contrary to popular opinion,
perhaps Lisp (or Lisp like languages) will be one of the few surviving human
written programming languages of the future.
--
William P. Vrotney - vro...@netcom.com

Nick Mein

unread,
Dec 19, 1994, 8:05:53 PM12/19/94
to

Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
Vogt have written that programming in Lisp is 2 - 10 times more productive
than programming in C (two of the posters professed to having little
experience with C++). Surprisingly, no suggestion is made that this
productivity advantage is restricted to any particular problem domain (perhaps
Lisp is 10x more productive for an NLP system, but only 2x more productive
if you are writing a Unix device driver?).

Anyway, I found the claims rather surprising, so here is a challenge. The
following toy program loads a 256x256 8 bit grayscale image from a file,
inverts it, and saves it to another file. I consider myself to be an
intermediate-level C++ programmer. The program took me an hour (almost to
the minute) to design, code, debug and test (Ok, I didn't test it very much).
I look forward to seeing the previous posters' Lisp implementations of the
same. It should only take them 6 - 30 minutes to knock them together, less
if they are reasonably experienced in Lisp.


// Toy C++ program that reads 256x256 gray-scale image from a file,
// and writes out the "negative" of the image.

#include <stdio.h>
#include <assert.h>

#define MEM_ALLOC_ERROR -1
#define FILE_OPEN_ERROR -2
#define FILE_READ_ERROR -3
#define FILE_WRITE_ERROR -4

const int N = 256;

class Image
{
public:
Image();
~Image();
int LoadImage(char* filename);
int SaveImage(char* filename);
void Invert();

private:
int width;
int height;
unsigned char* data;
};

Image::Image()
{
data = NULL;
}

Image::~Image()
{
delete [] data;
}

int Image::LoadImage(char* filename)
{
FILE* f;

assert(data == NULL);

if ((f = fopen(filename, "rb")) == NULL)
return FILE_OPEN_ERROR;

width = N;
height = N;

if ((data = new unsigned char[(size_t)width * height]) == NULL) {
fclose(f);
return MEM_ALLOC_ERROR;
}
if (fread(data, sizeof(unsigned char), (size_t)width * height, f)
!= (size_t)width * height) {
delete [] data;
data = NULL;
fclose(f);
return FILE_READ_ERROR;
}
fclose(f);
return 0;
}

int Image::SaveImage(char* filename)
{
FILE* f;

assert(data);

if ((f = fopen(filename, "wb")) == NULL)
return FILE_OPEN_ERROR;

if (fwrite(data, sizeof(unsigned char), (size_t)width * height, f)
!= (size_t)width * height) {
fclose(f);
return FILE_WRITE_ERROR;
}
fclose(f);
return 0;
}

void Image::Invert()
{
unsigned char* p;

assert (data);

for (p = data; p != data + width * height; p++)
*p = 255 - *p;
}

int main(int argc, char* argv[])
{
Image image;

if (argc != 3) {
fprintf(stderr, "Usage %s <infile> <outfile>\n", argv[0]);
return -1;
}
if (image.LoadImage(argv[1]) != 0) {
fprintf(stderr, "Error loading image from file %s\n", argv[1]);
return -1;
}
image.Invert();
if (image.SaveImage(argv[2]) != 0) {
fprintf(stderr, "Error saving image to file %s\n", argv[2]);
return -1;
}
return 0;
}

--
Nick Mein
MSc Student
Dept of Computer Science
University of Otago
Dunedin
New Zealand.

Marco Antoniotti

unread,
Dec 20, 1994, 1:11:00 AM12/20/94
to
In article <3d5alh$6...@celebrian.otago.ac.nz> nm...@bifrost.otago.ac.nz (Nick Mein) writes:


20 Minutes. I do not touch type. I tried to write a "multidimensional"
'read-sequence' (a function with that name will be in the upcoming CL
ANSI, the first ANSI OO Language standard), decided it was not worth
the effort, went back to the original version, corrected a few errors
(there are still a few around), checked CLtL2 a couple of times
(actually more) and looked up some documentation strings on my
faithful AKCL.

Pretty good, isn't it? :)

Given that I do not touch type (and I am *really* slow at typing), my
personal speedup is about 4x, give or take.

Moreover, the CL program does more than the C++ program and ILISP
under Emacs makes it very easy to modify one function at a time and
reload and recompile it in the Lisp system without recompiling the
whole file. I did not use this feature now, but it is definitively an
extra speedup for larger projects.

Happy Lisping

Marco Antoniotti

PS. For the record, I am fluent in C, C++ and Ada.

-------------------------------------------------------------------------------
;;; -*- Mode: CLtL -*-

(defconstant +N+ 256)

(defclass image ()
((data :accessor image-data
:initform (make-array (* +N+ +N+)
:initial-element 0
:element-type '(unsigned-byte 8)))))


(defgeneric load-image (image filename))

(defgeneric save-image (image filename))


(defmethod load-image ((im image) (filename string))
(with-open-file (in-stream filename
:direction :input
:element-type '(unsigned-byte 8)
:if-does-not-exist :error)
(read-sequence (image-data im) in-stream (* +N+ +N+))))


(defmethod save-image ((im image) (filename string))
(with-open-file (out-stream filename
:direction :output
:element-type '(unsigned-byte 8)
:if-does-not-exist :create
:if-exists :supersede)
(write-sequence (image-data im) out-stream (* +N+ +N+))))


;;; 'fread' is a C library function that does 'buffered' input. The
;;; upcoming CL ANSI standard will have a counterpart called
;;; 'read-sequence'.
;;; I could have looked up the way AKCL or CMUCL interface with the C
;;; library, but I decided to just write a stupid version of it.

(defun read-sequence (array-data
stream
&optional
(dims (first (array-dimensions array-data))))
(dotimes (i dims)
(setf (aref array-data i) (read-byte stream t))))

;;; Same comments.

(defun write-sequence (array-data
stream
&optional
(dims (first (array-dimensions array-data))))
(dotimes (i dims)
(write-byte (aref array-data i) stream)))


(defgeneric invert (image))

(defmethod invert ((im image))
(with-slots (data) im
(dotimes (i (* +N+ +N+))
(setf (aref data i) (- 255 (aref data i))))))


(defun test (infile outfile)
(let ((im (make-instance 'image)))
(load-image im infile)
(invert im)
(save-image im outfile)))
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertholdt Brecht

Michael Callahan

unread,
Dec 20, 1994, 2:01:26 AM12/20/94
to
Nick Mein (nm...@bifrost.otago.ac.nz) wrote:

: Anyway, I found the claims rather surprising, so here is a challenge. The


: following toy program loads a 256x256 8 bit grayscale image from a file,
: inverts it, and saves it to another file. I consider myself to be an
: intermediate-level C++ programmer. The program took me an hour (almost to
: the minute) to design, code, debug and test (Ok, I didn't test it very much).
: I look forward to seeing the previous posters' Lisp implementations of the
: same. It should only take them 6 - 30 minutes to knock them together, less
: if they are reasonably experienced in Lisp.


I do most of my programming in C and C++, although I would rather be working
in Lisp. I'd never opened a file in Common Lisp before now, so it took
me a few minutes to thumb through my CLtL2 and find the right keywords for
with-open-file. Here's my version of the program (untested):


(defun invert-file (input-file output-file)
(with-open-file (input-stream input-file
:direction :input
:element-type '(unsigned-byte 8))
(with-open-file (output-stream output-file
:direction :output
:if-exists :supersede
:element-type '(unsigned-byte 8))
;; read and write of the image header would go here

;; This is bad, I should go until the end of the file, not
;; count out how many bytes. It's not as flexible this way.
(dotimes (i (* 256 256))
(write-byte (- 255 (read-byte input-stream)) output-stream)))))


Well, that's it. I don't know if it's a fair comparison, the C++ version
had a lot of junk in it. If I were quickly prototyping in C:
(Note the emphasis on quickly!)


#include <stdio.h>

int main (int argc, char **argv)
{
FILE *in = fopen(argv[1], "rb");
FILE *out = fopen(argv[2], "wb");
unsigned char buffer;

while (fread(&buffer, 1, 1, in) == 1){
buffer = 255 - buffer;
fwrite(&buffer, 1, 1, out);
}
return 0;
}


Of course, I wouldn't recommend either one of these as examples in serious
software design. For that, I'd recommend grabbing an existing package,
like ImageMagick for images, and simply extending that to suit your needs.

mike

William Paul Vrotney

unread,
Dec 20, 1994, 3:13:47 AM12/20/94
to
In article <3d5alh$6...@celebrian.otago.ac.nz> nm...@bifrost.otago.ac.nz (Nick Mein) writes:

>
> Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
> Vogt have written that programming in Lisp is 2 - 10 time more productive
> than programming in C (two of the posters professed to having little
> experience with C++). Surprisingly, no suggestion is made that this
> productivity advantage is restricted to any particular problem domain (perhaps
> Lisp is 10x more productive for an NLP system, but only 2x more productive
> if you are writing a Unix device driver?).

I don't think that the productivity ratio is related to particular problem
domains as much as it is to the complexity of the problem domains. And so I
would agree with the gist of your last sentence in the above paragraph, but
for different reasons (I wouldn't even guess 2x but more like 1x if that for
the Unix device driver).

Also I should add that I think that it is more than just the complexity of
the problem domain that increases the ratio. Uncertainty is another.
Uncertainty of the algorithm and data representation. This is where
exploratory programming methods are also very helpful. Although complexity
and uncertainty of the problem domain frequently go hand in hand, they don't
have to. For example I helped develop an AI scheduling algorithm in Lisp
that took about six months to develop with a lot of intensive exploratory
programming. But the final algorithm turned out to be coded in only about 6
pages of code. Hence the final algorithm was not very complex but the
uncertainty of finding the correct abstractions was the hard part. Now that
I think back on this task, doing the same thing in C++ seems downright
silly. However I could easily recode the algorithm in C++ today in a few
hours.

>
> Anyway, I found the claims rather surprising, so here is a challenge. The
> following toy program loads a 256x256 8 bit grayscale image from a file,
> inverts it, and saves it to another file. I consider myself to be an
> intermediate-level C++ programmer. The program took me an hour (almost to
> the minute) to design, code, debug and test (Ok, I didn't test it very much).
> I look forward to seeing the previous posters' Lisp implementations of the
> same. It should only take them 6 - 30 minutes to knock them together, less
> if they are reasonably experienced in Lisp.
>

> - Negate image program deleted -

In my understanding of the productivity ratio this last paragraph does not
reflect any sort of important proof. Writing a small piece of a BITBLTer is
so well understood and so non-complex that I cannot see any advantage to
simply coding this thing in Lisp versus coding it in C++. You only have one
struct, Image, with two slots and a couple of methods. This hardly
qualifies as a complex program. In fact being a die hard Lisper that I am I
would probably still write this thing in C since it looks like an OS
utility. We already have a very elegant language for writing OS utilities
called C. I wouldn't even waste the time doing this in C++ unless there was
a requirement to provide an object interface.

By the way, I for one lose, it would probably take me more than 30 minutes
from start to finish to write your program in Lisp, so I will not even
start. But, if you really want to demonstrate the effect of the
productivity ratio lets pick a challenge that shows off Lisp and not one
that should obviously be done in C. Here is my challenge:

"You are given spectral sound files representing frequency, amplitude,
bandwidth and time of sounds picked up by hydrophones in the ocean. You are
looking for torpedo sounds but the hydrophones pick up the sounds of the
ship dragging the hydrophones and other miscellaneous noise in the ocean.
Develop an existing standard blackboard system shell and source model and
discover the blackboard level abstractions that allow the top level of the
blackboard to identify a torpedo source specified in the source model.
Hint1: Use the harmonic structure of sound sources for source modeling and
identification. Hint2: You need to consult with human experts on torpedo
sounds and do many experiments with that data to develop the rules."

This was one of my last tasks that I did in approx. 6 months in Lisp. How
long would it take you to do this in C++ given that you do not have our
results? I am very serious about this challenge to make a point but I hope
that you do not start it because I predict that you, assuming you get some
guidance on where to start, will fumble around for the first three months
only to discover that you need another 3 months to reinvent and implement
that portion of Lisp in C++ to give you the interactive visibility you need
to do the experiments. This prediction is based on the AI programs that I
developed in C++ instead of Lisp.

Erik Naggum

Dec 20, 1994, 5:01:43 AM
[Nick Mein]

| Anyway, I found the claims rather surprising, so here is a challenge.
| The following toy program loads a 256x256 8 bit grayscale image from a
| file, inverts it, and saves it to another file. I consider myself to
| be an intermediate-level C++ programmer. The program took me an hour
| (almost to the minute) to design, code, debug and test (Ok, I didn't
| test it very much). I look forward to seeing the previous posters'
| Lisp implementations of the same. It should only take them 6 - 30
| minutes to knock them together, less if they are reasonably experienced
| in Lisp.

hmmm.

(defun invert-image (image-path inverted-path)
  (with-open-file (image image-path
                         :element-type '(unsigned-byte 8)
                         :direction :input)
    (with-open-file (inverted inverted-path
                              :element-type '(unsigned-byte 8)
                              :direction :output
                              :if-exists :supersede)
      (dotimes (i (* 256 256))
        (write-byte (- 255 (read-byte image)) inverted)))))

4 minutes 50 seconds, including testing on a file made by this function

(defun make-image (image-path)
  (with-open-file (image image-path
                         :element-type '(unsigned-byte 8)
                         :direction :output
                         :if-exists :supersede)
    (dotimes (i 256)
      (dotimes (j 256)
        (write-byte j image)))))

add 2 minutes to read and understand your code.

I am not proficient with file I/O in Common Lisp, so I had to look up
`with-open-file' to get the arguments right.

the functions are also moderately fast.

* (time (make-image "/tmp/test"))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

Evaluation took:
0.27 seconds of real time
0.26 seconds of user run time
0.01 seconds of system run time
0 page faults and
1344 bytes consed.
NIL
* (time (invert-image "/tmp/test" "/tmp/test.out"))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

Evaluation took:
0.56 seconds of real time
0.54 seconds of user run time
0.02 seconds of system run time
0 page faults and
3944 bytes consed.
NIL

CMU CL 17f on a 50 MHz SPARCstation 2. /tmp is in memory.

#<Erik>
--
requiescat in pace: Erik Jarve (1944-1994)

Dave Toland

Dec 20, 1994, 8:37:17 AM
I don't believe a single number for comparing productivity is reasonable.
Good C++ programming really requires a good design phase before coding is
begun, or there will be lots of false starts. This is probably less true
of Lisp.

But which language scales better to *large* problems? Maybe Lisp gurus will
disagree with me, but I believe that C++ is better suited to very large
applications.

In addition to initial implementation, what about maintainability? Which
language is more reliably modified when the requirements evolve?

Opinions?

--
--------------------------------------------------------------------------
All opinions are MINE MINE MINE, and not necessarily anyone else's.
d...@phlan.sw.stratus.com | "Laddie, you'll be needing something to wash
(Dave Toland) | that doon with."

Mike McDonald

Dec 20, 1994, 10:09:14 AM
In article <3d5alh$6...@celebrian.otago.ac.nz>, nm...@bifrost.otago.ac.nz (Nick Mein) writes:

|> Anyway, I found the claims rather surprising, so here is a challenge. The
|> following toy program loads a 256x256 8 bit grayscale image from a file,
|> inverts it, and saves it to another file. I consider myself to be an
|> intermediate-level C++ programmer. The program took me an hour (almost to
|> the minute) to design, code, debug and test (Ok, I didn't test it very much).
|> I look forward to seeing the previous posters' Lisp implementations of the
|> same. It should only take them 6 - 30 minutes to knock them together, less
|> if they are reasonably experienced in Lisp.
|>
|>
|> // Toy C++ program that reads 256x256 gray-scale image from a file,
|> // and writes out the "negative" of the image.

If I were to write a C++ version, I'd write this:


#include <stdio.h>

main(int argc, char *argv[])
{
    int x, y;
    FILE *in, *out;

    if (argc != 3) {
        fprintf(stderr, "Usage %s <infile> <outfile>\n", argv[0]);
        exit(-1);
    }
    in = fopen(argv[1], "r");
    if (in == NULL) {
        fprintf(stderr, "Cannot open \"%s\" for reading.\n", argv[1]);
        exit(-1);
    }
    out = fopen(argv[2], "w");
    if (out == NULL) {
        fprintf(stderr, "Cannot open \"%s\" for writing.\n", argv[2]);
        exit(-1);
    }

    for (y = 0; y < 256; y++)
        for (x = 0; x < 256; x++)
            putc(255 - getc(in), out);
    fclose(in);
    fclose(out);
}

Took all of three minutes to type in and compile. Nick's program illustrates
one of the things about the current batch of C++ programmers that bugs the
daylights out of me. There seems to be this feeling that if you're using an
"object oriented" language, EVERYTHING has to be an object. What a total
bunch of *&%^$! The other thing Nick's program illustrates is the tendency
to over-specify and over-build. If all you need is a double for loop, just
write a double for loop!

Mike McDonald m...@titan.ess.harris.com

Ken Anderson

Dec 20, 1994, 6:01:15 AM

   Dick Gabriel said that Lucid's experience with developing their
   "C" tools (Energize) indicated at least a 3-fold difference between
   using C and Lisp as development languages - the increase being in
   Lisp's favour (i.e., productivity was improved 3 to 1 if one used
   Lisp). I agree with LGM, this proves nothing, but is simply some
   hearsay which is contrary to Mr. Trickey's hearsay.

Here is the quote I've seen. It is definitely worth reading the entire
article, and the entire "Critic at Large" series:

"My experience with C++ development started with a group just learning C++
and ended with a fairly expert group. At the start I estimated the
productivity advantage of Lisp/CLOS over C/C++ at a factor of 3 to 4; at
the end I set it at 30%." [p. 91]

R.P. Gabriel, Productivity: Is There a Silver Bullet?, Journal of Object
Oriented Programming, March-April 1994, pp. 89-92.

--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA

Skip Egdorf

Dec 20, 1994, 12:46:50 PM
In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

But which language scales better to *large* problems? Maybe Lisp gurus will
disagree with me, but I believe that C++ is better suited to very large
applications.

I believe that history is against you here. I believe that there is
now general agreement that the major problem with the Lisp Machine
architecture being marketed to the general computer world was that the
Lisp Machine was marketed as the ONLY machine that could handle the
very largest and most complex problems. This was borne out by the
many examples of Symbolics or Explorers being used for the massive
financial analysis expert systems, telephone-system routing, military
command and control modeling...

The problem was that there were still a lot of "simple" applications
that were being done in languages like C and C++, but the developers
didn't want to have to switch from the Sun/PC/Mac being used for the
"simple" problems to the Lisp Machine for the really hard stuff.
The C systems were better able to handle "Hello World".

These simple problems were often the entry-level prototypes for the
truly massive problems, but by the time the power of the Lisp machine
was actually needed, the problem was so entrenched in C (or whatever)
that the switch to the power tool was not perceived as feasible by
management. These projects might then fail due to the poor choice of
language with no acknowledgement that a different language might
actually have helped.

If the project was in C and failed, the conclusion was "It was too
large. No one could have done it."

If the project was in Lisp and failed, the conclusion was "Lisp is
just not a suitable language."

The number of truly massive projects accomplished on the Lisp
Machines is some evidence that these conclusions are both false, but
it would be interesting to get some sort of survey of "very large
applications" to see if a pattern would emerge favoring some language
or design technology.

Skip Egdorf
h...@lanl.gov

Jim McDonald

Dec 19, 1994, 5:00:20 PM

In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
...

|> It is interesting to muse with the powerful software automation of the
|> "software ICs" paradigm, such as NEXTSTEP, in the future perhaps the only
|> interesting programming will be consuming software ICs, producing software
|> ICs and otherwise exploratory programming. Contrary to popular opinion,
|> perhaps Lisp (or Lisp like languages) will be one of the few surviving human
|> written programming languages of the future.

It is interesting to watch features from Lisp creep into the C community.

A decade ago I was saying that the mainstream language after 2000 is
likely to be an evolutionary descendent of C, but will actually be much
closer to lisp implementations of the 80's than to C implementations of
that era. I think the evolution of C++ has tended to support that
contention. (My regret is that there will be a lot of wasted motion in
the process, witness the C++ world reinventing garbage collection.)

Even Bjarne's defense of C++ expressed pleasure at the speed with which
some lispish features had made it into C++, and many of the implementers
of those features were in fact Lisp programmers trying to carry their
vision of a programming environment into C++.

Lisp has the great advantage of being based on a firm mathematical
foundation. In the long run, the random walks taken by the C community
are likely to arrive at that same maximum in the space of programming
languages.


Christopher J. Vogt

Dec 19, 1994, 5:53:01 PM
In article <vrotneyD...@netcom.com>,

William Paul Vrotney <vro...@netcom.com> wrote:
>In article <D0xAI...@rheged.dircon.co.uk> si...@rheged.dircon.co.uk (Simon Brooke) writes:
>
>> In article <BUFF.94De...@pravda.world>,
>> Richard Billington <bu...@pravda.world> wrote:
>> >Dick Gabriel said that Lucid's experience with developing their
>> >"C" tools (Energize), their experience indicated at least a 3 fold
>> >difference between using C and Lisp as development languages - the
>> >increased being in lisp's favour (i.e., productivity was 3 to 1 improved
>> >if one used lisp). I agree with LGM, this proves nothing, but is simply
>> >some heresay which is contrary to Mr. Trickey's heresay.
>>
>> Well, cf Erik Naggum's response to the above, Dick Gabriel in
>> particular and the Lucid team in general must be taken as LisP
>> experts, so that may to some extent weight against their experience as
>> reported above. Nevertheless, if this statement is something Dick has
>> published and provided some quantifiable support for, it would be
>> extremely useful material. There is so *damn* *little* serious study
>> of the comparative productivity of differing programming tools.
>
>It seems like computer science is stuck with surveys here. Is there any
>more scientific approach?

I remember reading a survey years ago when I was at Symbolics, but I
can't find it now.

I think that anything scientific is problematic at best, because you
really need a *big* project, and a large number of people, and how
can that be arranged?

How about somebody sponsoring a programming contest. A team of people
spend a weekend working on a problem, or a set of problems, free to
select what language and what computer they use.

I used to think that Lisp was 10x C (and I have no experience with C++)
but I now believe there is a range between 2x and 5x.

--
Christopher J. Vogt vo...@netcom.com
From: El Eh, CA

Larry Hunter

Dec 20, 1994, 3:29:15 PM
Nick Mein posted the following challenge:

Anyway, I found the claims [that lisp is 2-10x more productive than C]


rather surprising, so here is a challenge. The following toy program
loads a 256x256 8 bit grayscale image from a file, inverts it, and saves
it to another file. I consider myself to be an intermediate-level C++
programmer. The program took me an hour (almost to the minute) to design,
code, debug and test (Ok, I didn't test it very much). I look forward to
seeing the previous posters' Lisp implementations of the same. It should
only take them 6 - 30 minutes to knock them together, less if they are
reasonably experienced in Lisp.

I am reasonably experienced in lisp. The following program took me 2
minutes and 49 seconds to design, code, debug and test. (I tested it about
as much as you did -- it runs fine on a single example). Most of that time
was spent looking up read/write-byte in the manual because I wasn't sure it
did exactly what I wanted -- it does.

I guess LISP is more like 20x more productive. And much easier to read,
understand and debug.


;; This reads a file of (height * width) 8 bit values, inverts them, and
;; writes them to another file. It will signal errors in pretty much the same
;; situations that the C program will.

(defun invert (infile outfile &key (height 256) (width 256))
  (with-open-file (in infile :element-type '(unsigned-byte 8))
    (with-open-file (out outfile :direction :output
                                 :element-type '(unsigned-byte 8))
      (dotimes (x height)
        (dotimes (y width)
          (write-byte (- 255 (read-byte in)) out))))))


--
Lawrence Hunter, PhD.
National Library of Medicine
Bldg. 38A, 9th floor
Bethesda. MD 20894 USA
tel: +1 (301) 496-9300
fax: +1 (301) 496-0673
internet: hun...@nlm.nih.gov
encryption: RIPEM via server; PGP via "finger hun...@work.nlm.nih.gov"

Dave Toland

Dec 20, 1994, 12:54:34 PM
In article <3d6s2q$h...@jabba.ess.harris.com>, m...@Titan.ESS.Harris.com (Mike McDonald) writes:
> If I were to write a C++ version, I'd write this:

<C (sic) program deleted to save on Spam cans>

> Took all of three minutes to type in and compile. Nick's program illustrates
> one of the things about the current batch of C++ programmers that bugs the
> daylights out of me. There seems to be this feeling that if your using an "object
> oriented" language, EVERYTHING has to be an object. What a total bunch of *&%^$!
> The other thing Nick's program illustrates is the tendendcy to over spec
> and over build. If all you need is a double for loop, just write a double for
> loop!

I can write a C program that looks a lot like BASIC or FORTRAN too. Yes
the C program you wrote is also a C++ program. For that matter, the
original example that Neil submitted could have used iostream constructs
instead of stdio functions, to be "purer".

I don't think anyone would argue against the statement that a small program
in C++ takes more design and planning than one in C, or Lisp. But when
the program gets larger, the pieces tend to fit together with fewer
deleterious interactions than programs written from a procedural
mindset. On the other hand, Lisp takes a functional approach, with
procedural features tacked on to the language after. My question
really comes down to, "Does the functional approach of Lisp scale up
as well as the OO approach of C++ (and other OO languages)."

I'm really talking less about the languages themselves than about the
programming paradigms they are designed to facilitate, although
inevitably, how well the language assists the user in holding to the
paradigm is also relevant.

I think it's also fair to note that the best choice for a given problem
domain may not be the best choice for a different problem domain, even if
the overall size and complexity of the solution is on the same order
of magnitude.

I'd prefer to keep the discussion in the newsgroup, though. I really
don't want to fill my mailbox with "Why my favorite language is the best".
I think though that a good engineer will be able to choose between
different languages for a given project on a reasonably impartial basis,
not that such a choice is always offered. I believe discussions such as
this can help us see where strong and weak points of each language
and design philosophy are, and tehrefore give us better ammunition for
an informed choice.

Stan Friesen

Dec 20, 1994, 3:14:50 PM
In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
|>
|> This is close to my experience. I have mentioned that there feels like a 5
|> to 1 productivity ratio in Lisp to C++ to my colleagues who are more or less
|> experts in both Lisp and C++ and the usual response is that it is too
|> conservative an estimate. One part of the big ratio is attributed to the
|> lack of a good C++ library. I created a Lispy C++ library for myself which
|> includes lists, dynamic typing and other nice Lispisms, ...

You did what??

This sounds like you are trying to program C++ as if it were Lisp.

This is certainly NOT going to get the most effective use out of C++.

Effective C++ programming style requires a different approach than
Lisp programming - the best solution of a given problem is often
different in the two languages.

[This is probably even *more* true of Lisp vs. C++ than it is of
Smalltalk vs. C++ - and Smalltalkish C++ code is very kludgy C++].

|> This leads me to believe that for a complex enough application, like a Go
|> program, it is better to develop it in Lisp then recode in C++ in the last
|> two weeks before delivery. Or better yet compile efficiently (enough) in
|> Lisp when that becomes more feasible.

Or it is better designed up front.

"exploratory programming" is *one* way to handle complexity.
Formal design is another way to handle complexity.

--
s...@elsegundoca.attgis.com sar...@netcom.com

The peace of God be with you.

Stan Friesen

Dec 20, 1994, 3:26:10 PM
In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
|> that should obviously be done in C. Here is my challenge:
|>
|> "You are given spectral sound files representing frequency, amplitude,
|> bandwidth and time of sounds picked up by hydrophones in the ocean. You are
|> looking for torpedo sounds but the hydrophones pick up the sounds of the
|> ship dragging the hydrophones and other miscellaneous noise in the ocean.
|> Develop an existing standard blackboard system shell and source model and
|> discover the blackboard level abstractions that allow the top level of the
|> blackboard to identify a torpedo source specified in the source model.
|> Hint1: Use the harmonic structure of sound sources for source modeling and
|> identification. Hint2: You need to consult with human experts on torpedo
|> sounds and do many experiments with that data to develop the rules."

This is, in some ways, both overspecified and underspecified.

It contains in the problem definition some implementation details
(for instance is the "blackboard" paradigm really required by the
problem to be solved, or is it merely the traditional way to
handle this sort of problem in Lisp?).

On the other hand, a full specification needs to at least *refer*
to a spectral decomposition of torpedo noise. Or, to put it another
way - I would follow Hint2 *before* I even started designing, let alone
coding, and would consider that time to be independent of the programming
effort, and therefore independent of the language I was using to implement.
Without the results of Hint2, I consider the problem spec incomplete.

Now, I can see that it might take months to gather the data required,
but I do not see that much programming is really necessary to do so,
just sound experimental design, and a decent mathematical model.
[Oh, I might use Macsyma or some such thing to manipulate the
mathematical model while tuning it - but that is not really
necessary to the process].

[P.S. are you sure that project isn't classified?].

David Hanley

Dec 20, 1994, 9:14:20 PM
Nick Mein (nm...@bifrost.otago.ac.nz)

Ok program, just a few complaints:

: #define MEM_ALLOC_ERROR -1
: #define FILE_OPEN_ERROR -2
: #define FILE_READ_ERROR -3
: #define FILE_WRITE_ERROR -4

Why not make these const ints?

: const int N = 256;

N is pretty un-descriptive

: void Image::Invert()
: {
:     unsigned char* p;
:
:     assert (data);
:
:     for (p = data; p != data + width * height; p++)
:         *p = 255 - *p;
: }

This could be written *much* faster, to wit:

int *p, *e = (int *) (data + width * height);
assert( data != NULL ); // Much nicer this way.
for( p = (int *) data ; p < e ; ++p )
    *p ^= 0xffffffff;

There are a few bogeys here, but this should be at least 4 times
faster than the version you gave, on a 32-bit machine.

--
------------------------------------------------------------------------------
| David James Hanley -- dha...@lac.eecs.uic.edu -- C++, OOD, martial arts |
| Laboratory for advanced computing | My employer barely KNOWS me. |
------------------------------------------------------------------------------
"There are trivial truths & there are great truths. The opposite of a
trivial truth is plainly false."
"The opposite of a great truth is also true."

-Neils Bohr

William Paul Vrotney

Dec 21, 1994, 12:07:06 AM

> In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
> |>
> |> This is close to my experience. I have mentioned that there feels like a 5
> |> to 1 productivity ratio in Lisp to C++ to my colleagues who are more or less
> |> experts in both Lisp and C++ and the usual response is that it is too
> |> conservative an estimate. One part of the big ratio is attributed to the
> |> lack of a good C++ library. I created a Lispy C++ library for myself which
> |> includes lists, dynamic typing and other nice Lispisms, ...
>
> You did what??
>
> This sounds like you are trying to program C++ as if it were Lisp.

No, I just want to have some of the library capability in C++ that I have in
lisp.

>
> This is certainly NOT going to get the most effective use out of C++.

Who cares? My main concern is to get the program working, not half working
with lots of gotchas and memory leaks. I've seen too many C++ programs in
this state. I'm wide open to suggestions as how to get the most effective
use out of C++.

>
> Effective C++ programming style requires a different approach than
> Lisp programming - the best solution of a given problem is often
> different in the two languages.
>

I think that I've tried this "C++ programming style" that you speak of and I
don't like it. I've read some of the C++ books. If I had to characterize
it I would say that this style (if I'm hearing you) is too static, and I
have to go way out of my way to write extraneous code to support this static
style, which introduces all sorts of bugs and obscurities. That's why I've
changed to a more Lispy programming style. I find that it is quite
effective.

>
> |> This leads me to believe that for a complex enough application, like a Go
> |> program, it is better to develop it in Lisp then recode in C++ in the last
> |> two weeks before delivery. Or better yet compile efficiently (enough) in
> |> Lisp when that becomes more feasible.
>
> Or it is better designed up front.
>
> "exploratory programming" is *one* way to handle complexity.
> Formal design is another way to handle complexity.
>

Well, I'm telling you from experience that the Formal Design approach for
the kinds of problems I mention is going to fail you. You can sit in a
room and formally design all you want but you are not going to get anywhere
until you get some visualization of your data world.

Nick Mein

Dec 21, 1994, 1:51:39 AM

Several people (Michael Callahan, Erik Naggum, Mike McDonald, Larry Hunter)
seemed to miss the point of my challenge. Of course I could have
written the toy program in a few lines of C. I tried to come up with a problem
that C++ seemed well suited to, but that did not take hours or pages
of code to solve. A real Image class would have other operations
defined than load, invert and save; and would be part of a much
larger program (and ideally reusable, so that it could be used in many such
programs).

From: mar...@mosaic.nyu.edu (Marco Antoniotti)

> 20 Minutes. I do not touch type. I tried to write a "multidimensional"
> 'read-sequence' (a function with that name will be in the upcoming CL
> ANSI, the first ANSI OO Language standard), decided it was not worth
> the effort, went back to the original version, corrected a few errors
> (there are still a few around), checked CLtL2 a couple of times
> (actually more) and looked up some documentation strings on my
> faithful AKCL.

> Pretty good, isn't it? :)

Yes, I'm impressed. Although I did have to change the names of functions
load-image and save-image to get your program to run (on a Harlequin
LispWorks system). Portability problems in the Lisp world?

> Moreover, the CL program does more than the C++ program

Could you be more specific? I don't read Lisp well enough to be able to
detect the extra functionality. Actually, I think that you have supplied
somewhat less than I have. Your image class (which, admittedly, is the
significant part of the example) is fine, but your test function does not
perform the command line parsing and handling of errors performed
by main.

Also (maybe significantly) there does not appear to be the equivalent
of my "assert(data)" - an important check that some client of the class
will not get away with doing something stupid - like manipulating an image
that hasn't been loaded yet!


From: call...@xmission.com (Michael Callahan)

> I do most of my programming in C and C++, although I would rather be working
> in Lisp.

> the C++ version had a lot of junk in it.

Thanks for the insightful and informative comment.

> main (int argc, char **argv)
> {
>     FILE *in = fopen(argv[1], "r");
>     FILE *out = fopen(argv[2], "w");
>     unsigned char buffer;
>
>     while (!feof(in)){
>         fread(&buffer, 1, 1, in);
>         buffer = 255 - buffer;
>         fwrite(&buffer, 1, 1, out);
>     }
> }

Oh, I see. Checking the number of command line arguments, checking
the return value of system calls, returning a meaningful value to
the system is a lot of junk.

Making those 65536 calls to fread and fwrite was pretty smart - should
really do wonders for performance.

I'd prefer it if you were working in Lisp as well.


From: vro...@netcom.com (William Paul Vrotney)

>> Lisp is 10x more productive for an NLP system, but only 2x more productive
>> if you are writing a Unix device driver?).

> I don't think that the productivity ratio is related to particular problem


> domains as much as it is to the complexity of the problem domains. And so I
> would agree with the gist of your last sentence in the above paragraph, but
> for different reasons (I wouldn't even guess 2x but more like 1x if that for
> the Unix device driver).

Really? It's always scary when you try to be sarcastic and somebody takes
you seriously.

> Also I should add that I think that it is more than just the complexity of
> the problem domain that increases the ratio. Uncertainty is another.
> Uncertainty of the algorithm and data representation. This is where
> exploratory programming methods are also very helpful.

I am not prepared to get into a debate on "exploratory programming" at the
moment (is this a technical term with its own literature?). Certainly
no one claims that C++ is suitable for all programming tasks. What is
at issue is whether or not Lisp is more suitable than C++ for (almost)
all tasks.

> Writing a small piece of a BITBLTer is
> so well understood and so non-complex that I can not see any advantage to
> simply coding this thing in Lisp versus coding it in C++. You only have one
> Struct, Image, with two slots and a couple of methods. This hardly
> qualifies as a complex program.

Ah, but complex systems can be decomposed into simpler components. See
Booch for an extensive discussion of this issue.

> By the way, I for one lose, it would probably take me more than 30 minutes
> from start to finish to write your program in Lisp, so I will not even
> start. But, if you really want to demonstrate the effect of the
> productivity ratio lets pick a challenge that shows off Lisp and not one

> that should obviously be done in C.

No. To make the challenge convincing you would have to pick a challenge that
shows off C++, and demonstrate that it was _still_ more productive to use
Lisp than to use C++.


From: Erik Naggum <er...@naggum.no>

> the functions are also moderately fast.

> * (time (make-image "/tmp/test"))
> Compiling LAMBDA NIL:
> Compiling Top-Level Form:

> Evaluation took:
> 0.27 seconds of real time
> 0.26 seconds of user run time
> 0.01 seconds of system run time
> 0 page faults and
> 1344 bytes consed.
> NIL
> * (time (invert-image "/tmp/test" "/tmp/test.out"))
> Compiling LAMBDA NIL:
> Compiling Top-Level Form:

> Evaluation took:
> 0.56 seconds of real time
> 0.54 seconds of user run time
> 0.02 seconds of system run time
> 0 page faults and
> 3944 bytes consed.
> NIL

> CMU CL 17f on a 50 MHz SPARCstation 2. /tmp is in memory.

Depends what you mean by fast, I suppose.

time invert test.img out.img
0.0u 0.0s 0:00 100% 57+253k 0+12io 0pf+0w

on a DECStation 5000.

From: d...@sw.stratus.com (Dave Toland)

> For that matter, the original example that Neil submitted could have used
> iostream constructs instead of stdio functions, to be "purer".


It is actually Nick, not Neil. Yes, I've been meaning to look up iostreams
one day.

> I don't think anyone would argue against the statement that a small program
> in C++ takes more design and planning than one in C, or Lisp. But when
> the program gets larger, the pieces tend to fit together with fewer
> deleterious interactions than programs written from a procedural
> mindset. On the other hand, Lisp takes a functional approach, with
> procedural features tacked on to the language after. My question
> really comes down to, "Does the functional approach of Lisp scale up
> as well as the OO approach of C++ (and other OO languages)."

I largely agree with this post, but I imagine that when people have been
referring to Lisp they mean Lisp with CLOS, which is also object-oriented.


From: mcdo...@kestrel.edu (Jim McDonald)

> There are at least a couple of problems with this challenge.

> The main problem is that you have given an overly specified problem.

I was hoping that people would (if they took the challenge seriously at all)
look at the spirit of the challenge without me laying down the rules
explicitly. Marco Antoniotti did so, and so did you with your second & third
versions. Thanks.

> The second problem is that the task you chose to measure is very low
> level and doesn't really use any interesting features of a language--
> it's almost nothing more than a few calls to the OS.

But my example does use an interesting feature of C++ - encapsulation.

> I'll do three versions:
> one very simple and close to your example, a second slightly
> more abstract, and a third closer to the problem I've given
> above.

> SECOND VERSION
> I used parameterized types and sizes, 2D arrays, and added an
> explicit test routine.

This sounds a bit closer to my version. Adding a header to the image
file didn't seem to add anything interesting to the problem.

> This time the code was written in a
> file (instead of at the lisp listener as above) and compiled.
> I compiled it a second time with the optimizing compiler but
> the test showed very little timing improvement in this case.

> > (test "/tmp/hack1" "/tmp/hack2" "/tmp/hack3")
> Elapsed Real Time = 5.40 seconds
> Total Run Time = 5.33 seconds
> User Run Time = 5.27 seconds
> System Run Time = 0.06 seconds
> Process Page Faults = 12
> Dynamic Bytes Consed = 131,088
> Ephemeral Bytes Consed = 5,184
> Elapsed Real Time = 5.14 seconds
> Total Run Time = 5.09 seconds
> User Run Time = 5.06 seconds
> System Run Time = 0.03 seconds
> Process Page Faults = 12
> Dynamic Bytes Consed = 131,088
> Ephemeral Bytes Consed = 5,168

The run times speak for themselves.

Scott McLoughlin

Dec 20, 1994, 8:06:19 PM
d...@sw.stratus.com (Dave Toland) writes:

> procedural features tacked on to the language after. My question
> really comes down to, "Does the functional approach of Lisp scale up
> as well as the OO approach of C++ (and other OO languages)."
>

Howdy,
Good question. I'd answer with a resounding yes. Common Lisp
includes CLOS, which "scales up" extremely well. See the papers in
"Object Oriented Programming: The CLOS Perspective" for some wild
and nifty examples of "scaling up" (a Good Read, IMHO, in any case).
In general, though, the Lisp family languages include
constructs that scale up to most problems, whether or not the language
includes native OOP constructs. Problem seems to be scaling
down ;-) This is where C/Pascal/C++ really shine.

=============================================
Scott McLoughlin
Conscious Computing
=============================================

David Hanley

Dec 21, 1994, 8:04:02 AM
David Hanley (dhanley@picasso)

In fact, even better:

#include <stdio.h>

void Reverse( char *f1 , char *f2 )
{
    FILE *in = fopen( f1 , "rb" );
    FILE *out = fopen( f2 , "wb" );

    int i;
    while( fread( &i , 4 , 1 , in ) == 1 )
    {
        i ^= 0xffffffff;    /* flip all 32 bits at once (assumes 4-byte int) */
        fwrite( &i , 4 , 1 , out );
    }
    fclose( in );
    fclose( out );
}

~2 minutes.

Curt Eggemeyer

Dec 21, 1994, 8:56:36 AM
In article <EGDORF.94D...@zaphod.lanl.gov> egd...@zaphod.lanl.gov (Skip Egdorf) writes:
>In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:
>
> But which language scales better to *large* problems? Maybe Lisp gurus will
> disagree with me, but I believe that C++ is better suited to very large
> applications.
>
>I believe that history is against you here. <rest removed>


Here at the lab, high-level complex problems are usually prototyped in LISP and
then later ported to C or C++. A good example of this was JSC's (Johnson
Space Center) development of CLIPS, derived from Inference's ART (a forward-
chaining rule-based system).

One of the C-world's problems is that the language's implementation gets in
the way of the representation of the problem. It seems that many developments
tend to "pin" down how to represent the problem in LISP first, because of its
"natural" extensibility and easy way of dealing with the abstract without the
need to memory manage. Once these principles are defined, you can then port
over to the static language environments. Of course, many of the LISP apps
could also be optimized and I would argue in the end you could probably get
"speedwise" within 10% of C/C++.


Lawrence G. Mayka

Dec 21, 1994, 10:25:05 AM
In article <3d75oq$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

I don't think anyone would argue against the statement that a small program
in C++ takes more design and planning than one in C, or Lisp. But when
the program gets larger, the pieces tend to fit together with fewer
deleterious interactions than programs written from a procedural
mindset. On the other hand, Lisp takes a functional approach, with
procedural features tacked on to the language after. My question
really comes down to, "Does the functional approach of Lisp scale up
as well as the OO approach of C++ (and other OO languages)."

Let me bring you up to date: Flavors, a first-generation
object-oriented programming system that included multiple inheritance
and method combination, was added to MIT Lisp around 1979, began
selling commercially in 1981, and was subsequently adopted as a de
facto standard by major Common Lisp vendors. CLOS, a
second-generation OOP system that adds multiple-argument dispatch
capability, was voted into the draft ANSI standard for Common Lisp in
1988 following an experimentation period, and was subsequently folded
into all major commercial and free-of-charge implementations,
replacing Flavors. ANSI should be proclaiming the Common Lisp
standard officially within a few weeks from now.

In ANSI Common Lisp, every information element is an object, belonging
to a class. Classes form an inheritance lattice, and can be
meta-classified as built-in (not further inheritable), structured
(extendible via single inheritance), and standard (extendible via
multiple inheritance). Methods can specialize on any class, including
built-in classes, as well as on individual objects. Methods can
specialize on any or all of their required positional arguments, not
just the first. Most major Common Lisp implementations also support a
meta-object protocol which permits the extension of the OOP system
itself.

In short, the Common Lisp community actually has more experience with
the OOP paradigm than perhaps anyone else.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com

Standard disclaimer.

Lawrence G. Mayka

Dec 21, 1994, 10:01:32 AM
In article <3d6mmd$r...@transfer.stratus.com> d...@sw.stratus.com (Dave Toland) writes:

In addition to initial implementation, what about maintainability? Which
language is more reliably modified when the requirements evolve?

I certainly agree that =evolvability= is perhaps the most important
characteristic of a large application nowadays. No one can afford to
build a stone-monolith application that crumbles as soon as the
requirements change. Note that this pertains not only to application
maintenance, but to initial implementation as well, insofar as
requirements can and do change even during the initial architecture,
design, and implementation. A company that refuses to recognize this
reality will repeatedly build products that are obsolete (from the
customer's viewpoint) before they leave the shop.

Similarly, evolvability is probably the most important characteristic
of a programming language as well. A programming language, together
with its implementations, libraries, development environments, and
associated infrastructure, is itself a very large application of
computing technology to meet customer needs. As these needs change,
so must the language, its implementations, libraries, and
environments. Once again, a language that refuses to recognize this
reality will be obsolete before it even achieves standardization.

Perhaps those with significant experience in both CLOS and C++ can
comment on the ability of those languages, and the large applications
built with them, to evolve smoothly and rapidly to meet customer
needs.

Jim McDonald

Dec 20, 1994, 10:29:01 PM

Note: My reply is rather long because I've included transcripts for three
sessions, including one testing a program that is substantially more
capable than Nick requested.
The functions and files alluded to here are available on request
(not that I think they're worth much) to avoid making this post
impossibly long.

In article <3d5alh$6...@celebrian.otago.ac.nz>, nm...@bifrost.otago.ac.nz (Nick Mein) writes:
|>

|> Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
|> Vogt have written that programming in Lisp is 2 - 10 time more productive
|> than programming in C (two of the posters professed to having little
|> experience with C++). Surprisingly, no suggestion is made that this
|> productivity advantage is restricted to any particular problem domain (perhaps
|> Lisp is 10x more productive for an NLP system, but only 2x more productive
|> if you are writing a Unix device driver?).
|>
|> Anyway, I found the claims rather surprising, so here is a challenge. The
|> following toy program loads a 256x256 8 bit grayscale image from a file,
|> inverts it, and saves it to another file. I consider myself to be an
|> intermediate-level C++ programmer. The program took me an hour (almost to
|> the minute) to design, code, debug and test (Ok, I didn't test it very much).
|> I look forward to seeing the previous posters' Lisp implementations of the
|> same. It should only take them 6 - 30 minutes to knock them together, less
|> if they are reasonably experienced in Lisp.
|>

[text of toy C++ program deleted]

There are at least a couple of problems with this challenge.

The main problem is that you have given an overly specified problem.

Better would be to say something like this:

Read an N dimensional image from a file where the byte size and
dimensions are given by data in the file.

Perform a series of operations on that image.

Save the result to another file.

For simple test cases, use one inversion to see that a different
file results, and two inversions to get a fixpoint.

The second problem is that the task you chose to measure is very low
level and doesn't really use any interesting features of a language--
it's almost nothing more than a few calls to the OS.

But, lest I be blamed for whining, I'll do three versions:
one very simple and close to your example, a second slightly
more abstract, and a third closer to the problem I've given
above.

TRANSCRIPT FOR SIMPLE VERSION
Total elapsed time: 4 minutes, 8 seconds to write and (crudely) test.

.sa_394. date
Tue Dec 20 16:36:30 PST 1994
.sa_395. ./bin/lisp
[startup messages elided...]

> (defun invert-image (in-file out-file)
    (with-open-file (in-stream in-file :element-type '(unsigned-byte 8))
      (with-open-file (out-stream out-file :element-type '(unsigned-byte 8)
                                  :direction :output)
        (dotimes (i 256)
          (dotimes (j 256)
            (write-byte (- 255 (read-byte in-stream)) out-stream))))))
INVERT-IMAGE
> (compile *)
;;; You are using the compiler in DEVELOPMENT mode (compilation-speed = 3)
;;; If you want faster code at the expense of longer compile time,
;;; you should use the production mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 0)))
;;; Generation of full safety checking code is enabled (safety = 3)
;;; Optimization of tail calls is disabled (speed = 2)
INVERT-IMAGE
> (with-open-file (s "/tmp/hack" :element-type '(unsigned-byte 8) :direction :output)
(dotimes (i 256) (dotimes (j 256) (write-byte 11 s))))
NIL
> (time (invert-image "/tmp/hack" "/tmp/hack2"))
Elapsed Real Time = 4.85 seconds
Total Run Time = 4.78 seconds
User Run Time = 4.77 seconds
System Run Time = 0.01 seconds
Process Page Faults = 47
Dynamic Bytes Consed = 0
Ephemeral Bytes Consed = 3,472
NIL
> (time (invert-image "/tmp/hack2" "/tmp/hack3"))
Elapsed Real Time = 5.03 seconds
Total Run Time = 4.79 seconds
User Run Time = 4.77 seconds
System Run Time = 0.02 seconds
Process Page Faults = 18
Dynamic Bytes Consed = 0
Ephemeral Bytes Consed = 3,424
NIL
> (quit)
.sa_396. diff /tmp/hack /tmp/hack3
.sa_397. date
Tue Dec 20 16:40:38 PST 1994
.sa_398.

SECOND VERSION
I used parameterized types and sizes, 2D arrays, and added an
explicit test routine. This time the code was written in a
file (instead of at the lisp listener as above) and compiled.
I compiled it a second time with the optimizing compiler but
the test showed very little timing improvement in this case.

Note that the test uses /bin/diff to compare the resulting files.
Total elapsed time: 9 minutes, 26 seconds.

.sa_399. date
Tue Dec 20 16:40:39 PST 1994
.sa_400. ./bin/lisp
[startup messages elided...]
[preparation of hack.lisp not shown here]
> (load (compile-file "~/hack.lisp"))
;;; You are using the compiler in DEVELOPMENT mode (compilation-speed = 3)
;;; If you want faster code at the expense of longer compile time,
;;; you should use the production mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 0)))
;;; Generation of full safety checking code is enabled (safety = 3)
;;; Optimization of tail calls is disabled (speed = 2)
;;; Reading source file "/usr/home/kestrel/mcdonald/hack.lisp"
;;; Writing binary file "/usr/home/kestrel/mcdonald/hack.sbin"
;;; Loading binary file "/usr/home/kestrel/mcdonald/hack.sbin"
#P"/usr/home/kestrel/mcdonald/hack.sbin"
> (prepare-test "/tmp/hack1")
NIL
> (test "/tmp/hack1" "/tmp/hack2" "/tmp/hack3")
Elapsed Real Time = 6.18 seconds
Total Run Time = 6.11 seconds
User Run Time = 6.05 seconds
System Run Time = 0.06 seconds
Process Page Faults = 19
Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,216
Elapsed Real Time = 5.97 seconds
Total Run Time = 5.92 seconds
User Run Time = 5.88 seconds
System Run Time = 0.04 seconds
Process Page Faults = 18
Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,168

Diff1:
(NIL NIL 0 NIL)

Diff2:
Binary files /tmp/hack1 and /tmp/hack2 differ
(NIL NIL 1 NIL)
NIL
> (proclaim '(optimize (speed 3) (safety 0) (compilation-speed 0)))
T
> (load (compile-file "~/hack.lisp"))
;;; You are using the compiler in PRODUCTION mode (compilation-speed = 0)
;;; If you want shorter compile time at the expense of reduced optimization,
;;; you should use the development mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 3)))
;;; Generation of runtime error checking code is disabled (safety = 0)
;;; Optimization of tail calls is enabled (speed = 3)
;;; Reading source file "/usr/home/kestrel/mcdonald/hack.lisp"
;;; Writing binary file "/usr/home/kestrel/mcdonald/hack.sbin"
;;; Loading binary file "/usr/home/kestrel/mcdonald/hack.sbin"
#P"/usr/home/kestrel/mcdonald/hack.sbin"


> (test "/tmp/hack1" "/tmp/hack2" "/tmp/hack3")
Elapsed Real Time = 5.40 seconds
Total Run Time = 5.33 seconds
User Run Time = 5.27 seconds
System Run Time = 0.06 seconds
Process Page Faults = 12
Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,184
Elapsed Real Time = 5.14 seconds
Total Run Time = 5.09 seconds
User Run Time = 5.06 seconds
System Run Time = 0.03 seconds
Process Page Faults = 12
Dynamic Bytes Consed = 131,088
Ephemeral Bytes Consed = 5,168


Diff1:

(NIL NIL 0 NIL)

Diff2:
Binary files /tmp/hack1 and /tmp/hack2 differ

(NIL NIL 1 NIL)
NIL
> (quit)
.sa_401. date
Tue Dec 20 16:50:05 PST 1994

THIRD VERSION (Timing is problematic since I had to deal with many
other things [like work!] interleaved with this -- estimate is 30
minutes during an elapsed time of about 90 minutes.)

This version has a file format that lets you specify the byte size
and the dimensions of the image (e.g. 256*256, 100*100*8, etc.).
There is a reasonably general mapping function that lets you
apply a sequence of functions to the locations in one image to
pointwise produce a new image.

The test scenario for this version is
(1) Make an image file of size 256 * 256 * 4 with 16-bit bytes,
initialized with 1's for all 262,144 elements.
Note that the image dimensions and byte size are parameters
passed to the routines to be tested.
(2) Read that image file.
(3) Create a partial inversion of the image by inverting all the
elements in the first plane (i.e., at positions whose third
index is 0), but keep the 3 other planes intact.
(4) Write the result back out to a second file.
(5) Repeat 2,3,4 starting with the second file, to get an image
in a third file that is a partial inversion of the second file.
This file should be equivalent to the first file.
(6) Repeat 2,3,4 but for step 3 use the inversion function twice
before writing the fourth file, which also should be
equivalent to the first file.

It should also take less than a minute to prepare and start other
tests (e.g., compute a new value at x,y that averages the 9 values
found at the locations around x,y, or compute the gradient at each
point from the old image and store it in planes 2 and 3, etc),
plus maybe a minute to run them.

So the challenge back to you is to provide a C++ version with
that functionality. Can you do it in less than an hour?

[preparation of hack3.lisp occurred here]
.sa_411. date
Tue Dec 20 18:20:43 PST 1994
.sa_412. ./bin/lisp

lisp startup messages elided...

> (load (compile-file "hack3.lisp"))
;;; You are using the compiler in DEVELOPMENT mode (compilation-speed = 3)
;;; If you want faster code at the expense of longer compile time,
;;; you should use the production mode of the compiler, which can be obtained
;;; by evaluating (proclaim '(optimize (compilation-speed 0)))
;;; Generation of full safety checking code is enabled (safety = 3)
;;; Optimization of tail calls is disabled (speed = 2)
;;; Reading source file "hack3.lisp"
;;; Writing binary file "hack3.sbin"
;;; Loading binary file "hack3.sbin"
#P"/usr/home/kestrel/mcdonald/hack3.sbin"

> (time (prepare-test "/tmp/jjj.xxx" '(unsigned-byte 16) '(256 256 4)))
Elapsed Real Time = 18.83 seconds
Total Run Time = 18.46 seconds
User Run Time = 18.19 seconds
System Run Time = 0.27 seconds
Process Page Faults = 81
Dynamic Bytes Consed = 524,296
Ephemeral Bytes Consed = 11,280,720
There were 21 ephemeral GCs
NIL

> (test "/tmp/jjj.xxx" "/tmp/jjj.two" "/tmp/jjj.three" "/tmp/jjj.four")

[copying jjj.xxx to jjj.two, inverting first plane]
Elapsed Real Time = 57.87 seconds
Total Run Time = 56.91 seconds
User Run Time = 56.45 seconds
System Run Time = 0.46 seconds
Process Page Faults = 70
Dynamic Bytes Consed = 1,048,712
Ephemeral Bytes Consed = 41,960,424
There were 80 ephemeral GCs

[copying jjj.two to jjj.three inverting first plane]
Elapsed Real Time = 59.70 seconds
Total Run Time = 57.48 seconds
User Run Time = 57.11 seconds
System Run Time = 0.37 seconds
Process Page Faults = 117
Dynamic Bytes Consed = 1,082,288
Ephemeral Bytes Consed = 41,960,624
There were 2 dynamic GCs
There were 81 ephemeral GCs

[copying jjj.one to jjj.four inverting inversion of first plane]
Elapsed Real Time = 81.40 seconds (1 minute, 21.40 seconds)
Total Run Time = 78.81 seconds (1 minute, 18.81 seconds)
User Run Time = 77.85 seconds (1 minute, 17.85 seconds)
System Run Time = 0.96 seconds
Process Page Faults = 68
Dynamic Bytes Consed = 2,104,904
Ephemeral Bytes Consed = 61,626,624
There were 2 dynamic GCs
There were 118 ephemeral GCs


Diff 1 2 -- File 2 is inversion of file 1
Binary files /tmp/jjj.xxx and /tmp/jjj.two differ

(NIL NIL 1 NIL)

Diff 1 3 -- File 3 is inversion of file 2

(NIL NIL 0 NIL)

Diff 1 4 -- File 4 is inversion of inversion of file 1

(NIL NIL 0 NIL)
NIL
> (quit)
.sa_413. date
Tue Dec 20 18:27:36 PST 1994

Lyman S. Taylor

Dec 21, 1994, 2:16:52 PM
In article <3d8j9r$c...@celebrian.otago.ac.nz>,
Nick Mein <nm...@bifrost.otago.ac.nz> wrote:
>
>The run times speak for themselves.
>--

Point 1.

Ahem, could we *PLEASE* stick with the initial challenge, which was to
measure PROGRAMMER PRODUCTIVITY, NOT whether one can write C code that
executes faster than Lisp code.

The MAJOR speed difference between these programs is that C's std library
has routines that can read/write blocks of bytes at a
time and Lisp's std. library doesn't. Big deal! Any decent Lisp vendor
could add that into a lisp implementation in a small amount of time.

However, as was expressed ( several people saying... "I don't normally do
file I/O in Lisp ... I had to look it up" ), doing file I/O is not something
most Lisp users preoccupy themselves with. Therefore, I don't expect very
many vendors to run out and add this "feature".

This is a difference in standard Libraries and has nothing to do with
programmer productivity unless that productivity is in a domain focused upon
by the standard Libraries. Picking tasks which lend themselves to the
strength of either language's standard Libraries does not provide us
with very much enlightenment.


Point 2.

The reason there is no "assert(data)" in the lisp code is that
that slot is automatically initialized when a new object is created.
( see the part in the slot declaration that says "initform". )

Point 3.

There is no command line testing in the lisp code because most lisp programs
are not invoked from the command line! There is no requirement that there
be some function "main" in a Lisp program.

How does the requirement that there be a function "main" increase programmer
productivity? In fact, "main" is really just a bothersome artifact in
this challenge since it is the "image class and its methods" that we are
trying to develop and test. This can be done in
'C' too.... using CodeCenter/ObjectCenter you could have skipped the "main"
junk and just written the 'C'/'C++' code.


>Although I did have to change the names of functions
>load-image and save-image to get your program to run (on a Harlequin
>LispWorks system). Portability problems in the Lisp world?

Point 4.

Not really. However, the word "image" does have connotations in the "default"
Lisp world. Although these two function as not part of the ANSI standard
I think you'll find them in almost every vendor's implementation.
If the original class had used the slightly more descriptive
class name of 'bitmap' then perhaps the name clash wouldn't have occurred. ;-)


>I am not prepared to get into a debate on "exploratory programming" at the
>moment (is this a technical term with its own literature?). Certainly

Point 5.

In some sense "exploratory programming" == "rapid prototyping". You have
heard of the latter, haven't you?

"Exploratory Programming" doesn't mean that you willy-nilly sit down at the
terminal and write your program. It means you have a not-so-fleshed-out
design and you want to "flesh out" the design by trying to implement
the not so well understood and/or novel parts of the design.

The model of development of "formally specify" then build (namely the
"Waterfall" model of software engineering) is clearly a "bust" in the
real world. It has not promoted any dramatic increases in programmer
productivity (except in those projects that are "very well understood").
[ Go over to the software eng newsgroups, announce that the "Waterfall"
model is the "one true way and these new dynamic approaches to
development are bunk", and watch the response. ]

That's because many of the projects that people want folks to code aren't
very well understood. Which means it will be virtually impossible
to write a complete/accurate specification that will not have to grow
in an evolutionary fashion.... hence "exploratory programming".

What might be interesting for folks here to argue is how much of a
contribution the language versus the development environment makes to
"exploratory programming". For Common Lisp the environment and
language have tended to "blur" together.

The impact on programmer productivity that "exploratory programming"
environments have seems very relevant to the initial topic being discussed.

Well that seems to be a long enough post for today... ;-)


--

Lyman S. Taylor Comment by a professor observing two students
(ly...@cc.gatech.edu) unconscious at their keyboards:
"That's the trouble with graduate students.
Every couple of days, they fall asleep."

John Nagle

Dec 21, 1994, 7:24:51 PM
s...@elsegundoca.ncr.com (Stan Friesen) writes:
>In article <vrotneyD...@netcom.com>, vro...@netcom.com (William Paul Vrotney) writes:
>|> >
>|> > This is certainly NOT going to get the most effective use out of C++.
>|>
>|> Who cares? My main concern is to get the program working, not half working
>|> with lots of gotchas and memory leaks. I've seen too many C++ programs in
>|> this state. I'm wide open to suggestions as how to get the most effective
>|> use out of C++.

>Well, start with using a C++ style library, and then design your
>solution to make effective use of that library.

>Since it is being included in the standard, the STL is a good choice.
>This is a very general library with a large number of different containers,
>iterators, and many, many algorithms *using* said containers.

Is STL really going in? It isn't in Plauger's book of the
draft C++ Standard Library.

John Nagle