0: It orphans circular structures.
Well, if you do it naively, sure. But the way I've always handled such
memory is that there's a top structure (head pointer, usually) that owns
everything inside. Everything outside just borrows memory. Anything that
includes circular pointers is complex enough that it belongs in such an
encapsulation. Obviously you would count references to the head, not to
every member.
1: It's slow, due to count update overhead.
Again, I get the impression that what one has in mind is a very naive
implementation. Say, one that "thrashes" every time there's a loop that
copies and destroys a pointer. I have even heard people claim that
finding the length of a list in Lisp inc/dec's for _every step_!
The purpose of reference counting is to handle "sibling" deep pointers
where you don't know which one will get destroyed first. Reference
counting anything whose order of destruction you can predict at compile
time seems pointless.
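To make that concrete, here is a rough C++ sketch of the convention (the
Ring/Node names are mine, purely illustrative): a circular structure whose
head is reference-counted while the nodes inside are simply owned.

#include <cstddef>

struct Node {
    Node* next;                 // may point back into the ring: a true cycle
    int   value;
};

class Ring {                    // the "top structure" that owns everything inside
public:
    Ring() : head_(0), refs_(1) {}
    void push(int v) {
        Node* n = new Node;
        n->value = v;
        if (head_ == 0) {
            n->next = n;        // one-element cycle
        } else {
            n->next = head_->next;
            head_->next = n;    // splice into the existing cycle
        }
        head_ = n;
    }
    void addref()  { ++refs_; }                        // count references to
    void release() { if (--refs_ == 0) delete this; }  // the head, not nodes
private:
    ~Ring() {                   // only the owner frees the nodes inside
        if (head_ == 0) return;
        Node* p = head_->next;
        head_->next = 0;        // break the cycle so the walk terminates
        while (p != 0) {
            Node* q = p->next;
            delete p;
            p = q;
        }
    }
    Node*       head_;
    std::size_t refs_;
};

Siblings pass the Ring pointer around and inc/dec only the head's count; the
cycle inside gets torn down exactly once, when the last reference to the
head goes away.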
Tom
--
t...@world.std.com
TomB...@delphi.com: Author of The Burning Tower
Good ideas. Now can you formalize these rules & prove conditions
under which any & all garbage will be collected? I'm trying to
understand _provably safe styles_ of programming.
Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hbaker/home-page.html
I'm not trying to be dense. What does 'own' mean? What does 'inside'
mean? How is ownership transferred and/or controlled? What happens
in the presence of exceptions? These are the kinds of things that
need to be made precise in order to convert good ideas into a robust,
workable _style_ of programming.
Right, of course not. I ftp'ed a paper of yours yesterday, you're
obviously not dense.
> What does 'own' mean?
Like in real life, an exclusive association. If A owns something, B, C,
and D et al. do not own it.
Or to put it another way, when resources such as memory are "owned",
they are controlled by a single thing, and that thing is the only thing
that acquires them and releases them. Non-owners might use the resources
with permission, but only the owner can release them.
If the owner does not disrupt the balance (i.e., acquisitions exactly
match releases), and maintains everything inside in a predictable
acquisition-status (e.g., a doubly-linked list might start and end empty,
add every node it allocates, and remove every node it deletes), then we
know that everything inside is not garbage.
The "predictable acquisition-status" need not be the same for everything
inside, it just has to be predictable. For instance, we might have a
convention that some pointers are valid and some aren't, and (of course)
only the valid pointers are in the acquired state. We just have to be
sure that we can always tell them apart, for instance by setting
pointers to a NULL value when we invalidate them.
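A tiny C++ sketch of that convention (the names are hypothetical, just for
illustration):

#include <stdio.h>

struct Holder {
    char* buf;      // convention: non-NULL == acquired, NULL == released
};

void release(Holder* h) {
    delete [] h->buf;           // delete [] of NULL is harmless
    h->buf = NULL;              // invalidate, so the two states never blur
}

int main() {
    Holder h;
    h.buf = new char[16];       // acquired
    release(&h);                // released, and provably so:
    if (h.buf == NULL)
        printf("buf is not acquired\n");
    return 0;
}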
> What does 'inside' mean?
Depends on exactly what your data structure is. Let's say it's a doubly-
linked list. If you traverse the entire list, the nodes that you
encounter are inside; everything you don't encounter is outside.
But let me see if I can frame a more general answer: Inside == reachable
through a certain set of operations from a given starting point. E.g., for
a dll, the operations might be get_prev and get_next, and the starting
point might be the head node. Obviously you want to deal with sets of
operations that allow insideness to be controlled from a single point, or
at least easily controlled.
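In code, the dll case might look like this minimal sketch (assuming a
NULL-terminated traversal; names illustrative):

#include <stddef.h>

struct DNode {
    DNode* prev;
    DNode* next;
};

// "Inside" == encountered on a traversal from the head; everything the
// traversal never reaches is "outside".
bool inside(const DNode* head, const DNode* candidate) {
    for (const DNode* p = head; p != NULL; p = p->next)
        if (p == candidate)
            return true;
    return false;
}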
> How is ownership transferred and/or controlled?
Transferred: The key is that a resource is owned by exactly one thing,
so if B wants to own something A owns, either A stops owning it (e.g., the
node in question is removed from the dll), or B acquires something
equivalent (e.g., B makes or receives another copy, which B then owns).
Controlled: If I understand your question, it depends on what tools your
language gives you. Some languages of course will let you stomp all over
everything and never tell you. I prefer strong typing for control, where
a deep pointer is a different type than a shallow pointer, and only
legal operations can happen among them. But that's just a preference.
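For illustration, one hedged C++ rendering of that preference; DeepPtr and
ShallowPtr are made-up names, not any real library:

template <class T>
class DeepPtr {                 // owning: acquires and releases
public:
    explicit DeepPtr(T* p = 0) : p_(p) {}
    ~DeepPtr() { delete p_; }
    T* get() const { return p_; }
    T* release() { T* p = p_; p_ = 0; return p; }   // transfer ownership out
private:
    DeepPtr(const DeepPtr&);             // copying an owner is forbidden:
    DeepPtr& operator=(const DeepPtr&);  // there must be exactly one owner
    T* p_;
};

template <class T>
class ShallowPtr {              // borrowing: may use, must never delete
public:
    ShallowPtr(const DeepPtr<T>& owner) : p_(owner.get()) {}
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
private:
    T* p_;
};

The compiler then rejects the illegal moves: you cannot copy a DeepPtr
(which would create two owners), and a ShallowPtr is only ever constructed
from an owner and never deletes anything itself.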
> What happens in the presence of exceptions?
Depends how far you unwind. If you unwind far enough that the owner
disappears, then it obviously can no longer own resources. Since it
should have no resources after it "dies", it must either release them or
give them to something else.
If you unwind out of acquisition/releasing routines, obviously you want
to correctly communicate in what state they left the resource.
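In C++ terms, destructors run during stack unwinding, so a minimal sketch
of "the owner releases its resources when it dies" looks like this (names
illustrative):

#include <stdio.h>

class Owner {
public:
    Owner() : buf_(new char[1024]) {}   // acquire on construction
    ~Owner() { delete [] buf_; }        // release when the owner "dies",
private:                                // including during stack unwinding
    char* buf_;
};

void risky() {
    Owner o;                            // o owns the buffer
    throw 42;                           // unwinding destroys o, freeing it
}

int main() {
    try {
        risky();
    } catch (int) {
        printf("unwound, and the owner released its resources\n");
    }
    return 0;
}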
> These are the kinds of things that need to be made precise in order to
> convert good ideas into a robust, workable _style_ of programming.
Thanks, that was much easier to answer.
Wow, we must really be approaching it from different perspectives.
You seem to want a more formal(?) academic(?) statement, which I'm at a
loss to frame. I probably have some implicit assumption, or assumptions,
that you don't. To me, as a programmer, hearing or saying "there's a top
structure that owns everything inside." expresses the idea precisely
enough. I'm not sure what it is you want me to make more precise.
Tom
>1: It's slow, due to count update overhead.
<more stuff about programming techniques snipped>
Of course, if you pare down the references to the minimum needed
and inc/dec as necessary, you'll have a fairly efficient system.
Unfortunately it won't be automatic.
I think it's important to maintain a distinction between reference
counting as accounting for explicit memory management and reference
counting collectors as a form of automatic memory management.
As a form of accounting, reference counts (or mark bits, or whatever)
are needed to track liveness. To properly deallocate memory, you
need this accounting information, the actual deletion code, and a
well-formed policy for your application about just who is responsible
for reclaiming memory. With allocated memory comes the responsibility
to deallocate that memory. That responsibility is handed off from
one part of the code to another (either explicitly or inside the
programmer's mind) until that object is deleted.
I believe this is what you were referring to when you wrote about
top structures and head pointers.
With an automatic memory manager, this responsibility along with the
actual deletion code stays inside the memory manager. The programmer
doesn't need to worry about transferring the responsibility to delete
objects because the memory manager will take care of it.
So my take on the subject is as follows. If you're keeping track of
objects for explicit storage management (where you pass responsibility
for deleting objects back and forth) then reference-counting is yet
another tool for keeping track of accounting information. If you're
relying on your memory manager to keep track and deallocate dead
objects for you, then reference-counting isn't so hot...
Kevin
____
Kevin Warne / kev...@whistler.com / +1 (604) 932-0606
Suite 204, 1200 Alpha Lake Road, Whistler BC, CANADA, V0N 1B0
Warne's Garbage Collector is the best way to allocate memory in C++ under NT
There are some cases where one or more elements of the circular
structure can't be reclaimed for other reasons. A good example is the
case of 'properties' on 'property lists' of traditional Lisp (but not
usually Scheme) systems. Since someone can create a reference to a
symbol out of thin air (i.e., by calling '(read)', or by interning a
character string at run-time), symbols can _never_ be reclaimed by the
standard garbage collector if they have ever been given a print name.
(I have argued elsewhere that 'gensym' should _not_ produce symbols
with print-names, and the print-names should be given out the first
time one tries to print them.)
Therefore, the programmer will have to go to some explicit effort --
even in Lisp -- to reclaim certain structures, whether or not they
have cycles.
>>1: It's slow, due to count update overhead.
><more stuff about programming techniques snipped>
>
>Of course, if you pare down the references to the minimum needed
>and inc/dec as necessary, you'll have a fairly efficient system.
>Unfortunately it won't be automatic.
I'm not sure what you mean by 'automatic' in this context. If you
mean 'oblivious to the programmer's coding style', then I suppose that
you are right. But what system programmer is ready to give up his
right to explicitly manage certain resources in certain situations?
What _would_ be nice, would be some way to know at _compile time_ if
certain constraints were violated, so that one would immediately know
that his program would eventually fail at run-time due to premature
evacuation (GC), or a storage leak (object never gets reclaimed).
The 'linear' style of programming achieves this goal, and I expect
that there may be other styles of programming which have interesting
compile-time and/or run-time properties. I guess I'm not ready to
concede that all of the important memory management ideas were found
by 1960 (at least reference counting and GC; see CACM 1960 for both
Collin's and McCarthy's papers).
>As a form of accounting, reference counts (or mark bits, or whatever)
>are needed to track liveness. To properly deallocate memory, you
>need this accounting information, the actual deletion code, and a
>well-formed policy for your application about just who is responsible
>for reclaiming memory. With allocated memory comes the responsibility
>to deallocate that memory. That responsibility is handed off from
>one part of the code to another (either explicitly or inside the
>programmer's mind) until that object is deleted.
What seems to be needed is some way to indicate 'ownership
responsibilities' as part of the _type_ of an object, so that such
things can be useful at compile time. Obviously, if the rules are
simple enough for a programmer to learn and memorize, then there is
some hope that one could build them into compile-time tools to do the
same things.
>I believe this is what you were referring to when you wrote about
>top structures and head pointers.
>
>With an automatic memory manager, this responsibility along with the
>actual deletion code stays inside the memory manager. The programmer
>doesn't need to worry about transferring the responsibility to delete
>objects because the memory manager will take care of it.
I'm not sure what fraction of all programmers is able (or ready) to
give up complete control of resource management to a 'black box'
system. One need only look to human economic systems to see that
'private ownership' of resources can often be considerably more
efficient in the utilization of the available resources than can
'system' (i.e., governmental) management.
Henry Baker
Read (192.100.81.1) ftp.netcom.com:/pub/hb/hbaker/README for ftp-able papers.
WWW archive: ftp://ftp.netcom.com/pub/hb/hbaker/home.html
************* Note change of address ^^^ ^^^^
OK. Actually I agree. Doing it without a lot of programmer effort, and
without introducing error opportunities is important. But so is opening
opportunities for efficiency.
Can what I describe be automated, or largely automated? I think at best
it could be largely automated, requiring a keyword or so and giving
compiler errors under conflicting conditions rather than runtime
failures. (Obviously only applicable to compiled languages)
Some optimizations can be automatic, like hoisting reference count
updates out of loops, or to the endpoints of an entire routine.
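A small C++ sketch of the hoisting idea (Counted and use() are stand-ins I
made up; it assumes the caller already holds a reference to each node, so
nothing dies mid-walk):

struct Counted {
    Counted* next;
    int refs;
};

static void inc(Counted* p) { ++p->refs; }
static void dec(Counted* p) { if (--p->refs == 0) delete p; }
static void use(Counted*) {}            // stand-in for real work

// Naive: the count is bumped and dropped on every step of the walk.
void walk_naive(Counted* head) {
    for (Counted* p = head; p != 0; ) {
        inc(p);
        use(p);
        Counted* next = p->next;        // pick up next before dropping p
        dec(p);
        p = next;
    }
}

// Hoisted: one inc/dec pair brackets the whole traversal.  Safe when the
// head owns the nodes, so holding the head keeps everything inside alive.
void walk_hoisted(Counted* head) {
    if (head == 0) return;
    inc(head);
    for (Counted* p = head; p != 0; p = p->next)
        use(p);
    dec(head);
}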
Others, like constraining a structure to only take references it can
own, would require communication from the programmer, though I'd love to
be shown wrong.
Perhaps the ideal would be to develop something with GC and then turn it
off when (if) you got around to fine-tuning it.
I think that future programming languages will have constructs for compile-time
behavior as well as run-time behavior. The compile-time behavior will be a
superset of what is handled now with makefiles, macros, preprocessors, etc.
Through code that executes at compile time, developers will be able to specify
exactly how their classes and objects can be used. Given such objects and
metaobjects, I think such behavior is possible... Right now, probably not...
>Perhaps the ideal would be to develop something with GC and then turn it
>off when (if) you got around to fine-tuning it.
Waitasec... The implication here is that GC is and will always be slower than
hand-coding all the deletes. That's just not true.
1. A collector deallocates a large number of objects all at once. Hand-coded
deletes happen 1 at a time. This makes a difference when writing the
memory manager. The deallocation costs for a good collector are minimal.
(e.g., copying + non-copying collectors, implicit reclamation, etc.)
The deallocation costs for a good explicit memory manager are about the
same (or slightly higher) as the allocation costs.
2. Deallocation is part of the critical path when using explicit memory
management. A spreadsheet recalc that allocates a lot of temporary
memory is going to have to deallocate that memory. And the user is going
to have to wait through that deallocation. A collector can defer collection
to application idle time, further decreasing the cost of deallocation.
4. For short-lived processes, a collection might not be required at all.
This dynamic optimization isn't available to applications that hard-code their
deletes.
5. Collectors can be made parallel. Parallel collectors can run on separate
processors, with the appropriate speed benefits for otherwise singlethreaded
applications.
6. Collectors don't need to be black boxes for speed-hungry developers. There
are many collectors with very flexible interfaces. Developers can use their
knowledge of a collector's operation to further tune their application. And
a good collector should be flexible enough to coexist with an explicit memory
manager.
7. Collectors just get faster as users add more memory.
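As a rough illustration of point 1, here is a minimal arena in C++:
allocation is a pointer bump and everything is reclaimed in one cheap step,
much as a copying collector recycles a whole semispace (a sketch only; real
allocators also handle alignment and growth):

#include <cstdlib>
#include <cstddef>

class Arena {
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))),
          top_(base_), end_(base_ + bytes) {}
    ~Arena() { std::free(base_); }
    void* alloc(std::size_t n) {         // a pointer bump per allocation
        if (top_ + n > end_) return 0;   // out of space (no growth here)
        void* p = top_;
        top_ += n;
        return p;
    }
    void reset() { top_ = base_; }       // "deallocate" every object at once
private:
    char* base_;
    char* top_;
    char* end_;
};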
Now obviously the boundary between what is and is not GC is very fuzzy. As is the
boundary between what is and is not automatic.
But regardless of where you draw the line, I suggest that most applications today
would benefit from the use of GC.
Hmm, sounds like the granularity of reference/liveness is too coarse
in this case for proper collection (although it might be just right
for other reasons).
>I'm not sure what you mean by 'automatic' in this context. If you
>mean 'oblivious to the programmer's coding style', then I suppose that
>you are right. But what system programmer is ready to give up his
>right to explicitly manage certain resources in certain situations?
Indeed. Using a mixture of the two is probably the best policy right
now.
>The 'linear' style of programming achieves this goal, and I expect
>that there may be other styles of programming which have interesting
Are there any usable languages yet for actually programming in this
style? (I know... I should check out your FTP cache first before
asking...)
>compile-time and/or run-time properties. I guess I'm not ready to
>concede that all of the important memory management ideas were found
>by 1960 (at least reference counting and GC; see CACM 1960 for both
>Collin's and McCarthy's papers).
On the contrary. I think that memory management is just now beginning
to get interesting.
>I'm not sure what fraction of all programmers is able (or ready) to
>give up complete control of resource management to a 'black box'
>system. One need only look to human economic systems to see that
>'private ownership' of resources can often be considerably more
>efficient in the utilization of the available resources than can
>'system' (i.e., governmental) management.
That comparison is reversed (and probably not all that accurate anyway,
as objects are a lot less complex than people).
I assume you're making a comparison between global optimization and
control as opposed to multiple distributed entities each optimizing
locally.
Using a collector, each method and object can locally allocate and use
memory without worrying about how to deallocate that memory. Memory
usage can become dynamic. For instance, a method may allocate a new
object and return that object as a matter of course. If the caller
doesn't use the returned object then it will be reclaimed automatically
at little cost.
Contrast this to existing C++ practice... The programmer (the gov't in
this example) has to set out a global allocation policy and manually
trace the lifespan of each object... If a class is reused in a
second application then this lifespan analysis has to be redone
and allocations retraced.
Yep. Bertrand Russell had one of them too. Only he spent
ages trying to make it precise -- and still only found out just
in time to slip a page into his book saying "Woops -- the whole
thing is flawed!"
You may think that a set is a collection of things,
but Russell showed that vague, intuitive (naive) notions are misleading.
See Ellis and Detlefs' paper on GC in C++. They try to deduce what
subset of C++ will work with a conservative garbage collector.
To compare a naive and formal statement, see the ARM
(naive) and the C++ Working Paper (attempts to be precise).
Examining those naive and hidden assumptions is important.
Helps in design too. Whenever I "feel" a piece of code
ought to work but I can't explain exactly why -- guess where
the bug pops up?
--
JOHN (MAX) SKALLER, INTERNET:max...@suphys.physics.su.oz.au
Maxtal Pty Ltd,
81A Glebe Point Rd, GLEBE Mem: SA IT/9/22,SC22/WG21
NSW 2037, AUSTRALIA Phone: 61-2-566-2189
Traditional Lisp macros, compiler-macros and advisors can already
fulfill some of these roles. It would be nice if we could better
formalize some of these tasks.
>Waitasec... The implication here is that GC is and will always be slower than
>hand-coding all the deletes. That's just not true.
>
>1. A collector deallocates a large number of objects all at once. Hand-coded
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This depends upon the collector, but is generally true.
> deletes happen 1 at a time. This makes a difference when writing the
> memory manager. The deallocation costs for a good collector are minimal.
> (e.g., copying + non-copying collectors, implicit reclamation, etc.)
> The deallocation costs for a good explicit memory manager are about the
> same (or slightly higher) as the allocation costs.
'Finalization' is becoming a more and more important issue, so the
interaction of GC and finalization needs to be clarified. With explicit
deallocation, the timing of finalization is usually better defined.
>2. Deallocation is part of the critical path when using explicit memory
> management. A spreadsheet recalc that allocates a lot of temporary
> memory is going to have to deallocate that memory. And the user is going
> to have to wait through that deallocation. A collector can defer collection
> to application idle time, further decreasing the cost of deallocation.
There's no reason why 'lazy' and/or parallel deallocation can't be done
with explicit deallocation.
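For instance, a minimal sketch of such 'lazy' explicit deallocation in
C++ -- a deferred-delete queue drained at idle time (hypothetical names,
illustration only):

#include <vector>
#include <cstddef>

class LazyDeleter {
public:
    void defer(char* p) {                // cheap on the critical path
        pending_.push_back(p);
    }
    void drain() {                       // run at idle time, or from a
        for (std::size_t i = 0; i < pending_.size(); ++i)  // separate thread
            delete [] pending_[i];
        pending_.clear();
    }
private:
    std::vector<char*> pending_;
};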
>4. For short-lived processes, a collection might not be required at all.
> This dynamic optimization isn't available to applications that hard-code their
> deletes.
You have just made a pretty good argument for more tailored memory
management strategies.
>5. Collectors can be made parallel. Parallel collectors can run on separate
> processors, with the appropriate speed benefits for otherwise singlethreaded
> applications.
Other kinds of memory management schemes can also be made parallel.
'Decomposers' can easily be run as a separate thread -- especially
in the context of 'lazy' deallocation.
>6. Collectors don't need to be black boxes for speed-hungry developers. There
> are many collectors with very flexible interfaces. Developers can use their
> knowledge of a collector's operation to further tune their application. And
> a good collector should be flexible enough to coexist with an explicit memory
> manager.
Good idea. Now the question is how to develop good, high-level protocols
for communication with an automatic resource manager.
>But regardless of where you draw the line, I suggest that most applications today
>would benefit from the use of GC.
Although much work has been done on GC for 'distributed' applications,
I don't think that the benefits of GC for these applications have been
so clearly demonstrated yet.
See Ellis and Detlefs' paper on GC in C++. They try to deduce what
subset of C++ will work with a conservative garbage collector.
Just a minor correction: We designed a safe subset that would work
with a wide range of garbage collection algorithms, not just
conservative algorithms.
pardon me for asking the seemingly obvious, but where would I find this
paper?
#<Erik>
--
Microsoft is not the answer. Microsoft is the question. NO is the answer.
Unfortunately the code required to signal 'delete me please' takes
about as long to execute as does the average case deallocation.
(Assuming the presence of a slick memory manager that will delete objects
in ~ 15 cycles) With a collector this overhead doesn't exist.
>You have just made a pretty good argument for more tailored memory
>management strategies.
There are many different techniques for creating and destroying objects.
Using a GC adds a few techniques. By having access to and using both
automatic and explicit memory managers, developers should be able to
create better applications as a result.
>Although much work has been done on GC for 'distributed' applications,
>I don't think that the benefits of GC for these applications have been
I think there's a lot more work to do to create decent environments
for creating distributed applications. Making a distributed GC work
well would be one step. It's so easy in a distributed environment to
orphan processes and objects that it'd be nice to have those partitioned
objects take care of themselves.
> Contrast this to existing C++ practice... The programmer (the gov't in
> this example) has to set out a global allocation policy and manually
> trace the lifespan of each object... If a class is reused in a
> second application then this lifespan analysis has to be redone
> and allocations retraced.
>
Howdy,
Well put. I think it might be important here on comp.lang.c++
to emphasize that GC is not just (1) lazy programming or even
(2) useful for implementing certain algorithms. (Both might be true
for specific cases).
GC, i.e. "automatic storage management", is a key player
in how easily system modularization and code reuse is achieved. In
my experience at various run-o-the-mill coding shops, modularization
and reuse have often broken down over issues of resource management.
For these non-rocket scientists, GC would have been a godsend and
would have helped them leap the barrier between OO hype and practice.
OTOH, I do not think C/C++ should be retrofitted with
GC - it breaks the spirit of the languages. Which leaves a burning
question.....
=============================================
Scott McLoughlin
Conscious Computing
=============================================
I think that future programming languages will have constructs for
compile-time behavior as well as run-time behavior. The compile-time
behavior will be a superset of what is handled now with
makefiles, macros, preprocessors, etc. Through code that executes
at compile time, developers will be able to specify exactly how
their classes and objects can be used. Given such objects and
metaobjects, I think such behavior is possible... Right now,
probably not...
What you are describing is not `future programming languages'. Such
languages exist. One, called Common Lisp, was actually getting a fair
amount of use a decade ago. Unfortunately, most people gave up on it
right about the time that significant improvements in compiler and
garbage collection technology were becoming standard for conventional
architecture Common Lisp implementations.
Code is data. Data is code. Lisp leverages this fact while glorified
assembly languages hide it. You should not need separate languages
(such as preprocessor-speak) to define new code transformations.
--
Rick Busdiecker <r...@lehman.com> Please do not send electronic junk mail!
Lehman Brothers Inc.
3 World Financial Center "The more laws and order are made prominent, the
New York, NY 10285-1100 more thieves and robbers there will be." --Lao Tzu
That's my experience too. It's sad.
>(ii) the average C (or similar language) programmer spends more time
>debugging than coding (*);
Often true, but not by as large an extent as you might think.
>(iii) the overwhelming majority of bugs in C (or similar language)
>programs are problems with memory allocation (walking off the end of
>the world; early deallocation; leaks) (*);
Not if the programmer is careful. I hardly ever have this type of bug.
{...}
>((*) Asterisks indicate bald assertions: I can't cite anything to back
>these up, but it's my experience).
{...}
> But in
>these days of megabytes of core, hand crafting (for example) memory
>management is at best an almost masturbatory vanity, and at worst a
>gross waste of resources (since the time you waste getting the bloody
>thing to work is far more valuable than the totality of the processor
>time it will ever save).
Hand-crafted memory management is *essential* in some cases. There is
no automatic allocation/deallocation system that allows programs to do
everything they can do in C (and certain other languages). In a
functional language, this isn't a problem, but functional languages
really aren't appropriate for most applications.
>Using automatic store management of whatever kind removes the single
>major cause of error in code, and allows the programmer to concentrate
>on the logic of the program. It thus greatly speeds application
>development, and enhances application reliability.
As I've indicated above, memory management is only a major cause of errors
if the programmer is sloppy. For good programmers, it's just another
arena for bugs to display themselves in.
>There is a place, of course, for low level languages like C:
C is not a low-level language. What I think you mean is that it allows
low-level access to many OS functions.
> These
>things will be used by millions of users and will be required to run
>for long periods perfectly reliably. Such programs are rare,
>thankfully, because producing code of this quality is hard and far
>fewer people can do it than think they can.
Ideally, EVERY program would operate reliably for long periods of time.
Every programmer should aim for that, and not just when writing kernels.
>Building code is an engineering problem. Suppose you are building a
>vehicle for a speed record attempt, you might very well argue 'right,
>we'll weld this assembly by hand, because the very best craftsmen can
>work to closer tolerances than a robot can, and this assembly is
>critical'. However, if you were production engineering the next Ford
>Mondeo, and you suggested hand welding the body shell, you'd be out of
>a job.
A valid argument, but again, you miss the point of explicit memory
management. There are some things that implicit memory management
*can't* do. Sure, you might not want to do such things, and the way
that functional languages are used you don't want to. But there is a
need for this ability.
-andrew
Very well put. Add to (iii)...more so if weighted by cost.
---
NeXTMail:peter...@corona.com
Corona Design, Inc.
P.O. Box 51022
Seattle, WA 98115-1022
From: cs...@csv.warwick.ac.uk (Andrew Main)
Newsgroups: alt.lang.design,comp.lang.c,comp.lang.c++,comp.lang.lisp
Date: 29 Nov 1994 20:17:50 -0000
Organization: University of Warwick, Coventry, UK
...
Hand-crafted memory management is *essential* in some cases. There is
no automatic allocation/deallocation system that allows programs to do
everything they can do in C (and certain other languages). In a
functional language, this isn't a problem, but functional languages
really aren't appropriate for most applications.
^^^^
C/C++ aren't appropriate for most applications either :)
...
C is not a low-level language. What I think you mean is that it allows
low-level access to many OS functions.
...
As do most serious Common Lisp implementations.
...
A valid argument, but again, you miss the point of explicit memory
management. There are some things that implicit memory management
*can't* do. Sure, you might not want to do such things, and the way
that functional languages are used you don't want to. But there is a
need for this ability.
That is correct. I am just now writing a CL library that does need
explicit memory management for a lot of data structures I build "on
the fly", but that cannot be collected since they get memoized in a
hash table. I can build it easily, relying on the garbage collector on
an "as needed" basis.
Happy Lisping
Not if you buy them in India or parts of eastern Europe. Or many other non-US,
non-Euro countries. (Read _The Decline of the American Programmer_ or whatever
it's called)
(They're paid pretty damn badly in the UK too -- but then just about everyone
is)
--
: stephen benson : : : : : : : : step...@scribendum.win-uk.net
| Look, we live in a world in which,
:
| (v) programmers are expensive and steadily getting more
| so.
[Stephen Benson]
| Not if you buy them in India or parts of eastern Europe. Or many other
| non-US, non-Euro countries. (Read _The Decline of the American
| Programmer_ or whatever it's called)
Edward Yourdon: Decline and Fall of the American Programmer, ISBN 0-13-203670-3
last I heard, as in: when I got loads of mail from several Indian companies
thinking I'd like to use their cheap Windoze programming services just because
my company has "software" in its name, they are inexpensive all right, but
also getting more expensive much faster than their American counterparts.
must be the rise in demand.
Yourdon basically claims that since they got into the programming business
much later than American programmers, they use CASE tools, which is
Yourdon's favorite solution to the software crisis. there is probably some
truth to his claims, and from what I have seen of the C++ world, it is nigh
impossible to write C++ code without a machine doing the work for you.
I have yet to see comparisons of productivity between a CASE/C++ programmer
vs a LISP programmer, but they claim a five-fold increase in productivity,
and that's mostly from the dropping number of bugs, for the CASE/C++ combo,
and a ten-fold increase in productivity with LISP. such figures are
probably way too round to be accurate, but if there's truth to them, you'd
still be way ahead with a real programming language compared to a crippled
language, however good the crutches.
#<Erik>
--
The check is in the mail. This won't hurt. You'll find it on the Web.
> Yourdon basically claims that since they got into the programming business
> much later than American programmers, they use CASE tools, which is
> Yourdon's favorite solution to the software crisis. there is probably some
> truth to his claims, and from what I have seen of the C++ world, it is nigh
> impossible to write C++ code without a machine doing the work for you.
Really? That's news to me.
> I have yet to see comparisons of productivity between a CASE/C++ programmer
> vs a LISP programmer, but they claim five times increase in productivity,
> and that's mostly from the dropping number of bugs, for the CASE/C++ combo,
> and ten times increase in productivity with LISP. such figures are
> probably way too round to be accurate, but if there's truth to them, you'd
> still be way ahead with a real programming language compared to a crippled
> language, however good the crutches.
The closest thing I've seen to a believable study was by Howard Trickey,
who implemented a fairly substantial program (O(10K lines)) in Lisp and
C++ simultaneously and took notes. His conclusion was that in that
particular context, choice of language didn't affect his productivity much.
Yes, it's only one data point and yes one can find all kinds of flaws
in the methodology, but I haven't seen any evidence that isn't at least as flawed.
--
--Andrew Koenig
a...@research.att.com
"Haskell vs. Ada vs. C++ vs. Awk vs. ... An Experiment in Software
Prototyping Productivity" by Paul Hudak and Mark P. Jones,
Available via anonymous ftp as:
ftp://nebula.cs.yale.edu/pub/yale-fp/papers/NSWC/jfp.ps
... in which there are also references to the original write-up of the
prototyping experiment.
Giuliano.
I am obliged to point out that, given AT&T's very close past and
present relationship with C++ and strong private and public commitment
to it, any refutation of Mr. Trickey's ancient memorandum and
Mr. Koenig's comments above, including any contrary evidence, would
very likely be stamped AT&T Proprietary--Not for Public Disclosure.
AT&T employees cannot be considered unbiased, unconstrained
participants in discussions of this kind. I can only suggest that
readers learn CLOS and C++ for themselves, try both languages on
appropriate and significant applications, making use of development
environments appropriate to each, and then come to their own
conclusions.
If this thread is to continue at all, I suggest that it not be
cross-posted to widely disparate newsgroups.
--
Lawrence G. Mayka
AT&T Bell Laboratories
l...@ieain.att.com
Standard disclaimer.
> I am obliged to point out that, given AT&T's very close past and
> present relationship with C++ and strong private and public commitment
> to it, any refutation of Mr. Trickey's ancient memorandum and
> Mr. Koenig's comments above, including any contrary evidence, would
> very likely be stamped AT&T Proprietary--Not for Public Disclosure.
> AT&T employees cannot be considered unbiased, unconstrained
> participants in discussions of this kind.
No one is unbiased or unconstrained.
For the record, I will say that if a refutation of Trickey's paper
exists, I have not seen it. Moreover, I have no reason to believe
one exists, if only because it is hard to refute experience.
I will also say that I work on C++ because I believe it is a
worthwhile endeavor, not because anyone is forcing me to do so.
And it is *far* from the only programming language I speak fluently.
--
--Andrew Koenig
a...@research.att.com
| I agree with LGM, this proves nothing, but is simply some hearsay which
| is contrary to Mr. Trickey's hearsay.
I have read Howard Trickey's report in SIGPLAN 23(2). "hearsay" is a good
term to cover the article. there is not enough information in this paper
to conclude anything, not even restricted to the specific case under study.
the lack of rigor in the presentation of both arguments and conclusion, the
lack of _relevant_ detail in the description of each of the versions of the
program, and the unwarranted generalizations individually and together make
this paper utterly worthless. it is also very, very old, and while its
relevance may have been non-zero at the time it was written, today it is
not worth the paper it was printed on.
according to the author's own data, he has 42KLOC experience in procedural
languages (Pascal, C, PL/I, and PDP-11 assembly), and 3KLOC experience in
Franz Lisp. "Of course one reason why the two versions are about the same
size is that my desire to have identically behaving programs gave a strong
incentive to look at one version when coding the other. Perhaps a True
Lisper has a completely different, more compact coding style. ... I don't
think there are any places where one language would have dictated a vastly
different design than the other."

the author is terminally confused on the difference between language and
implementation. "However, for finding syntax errors, C++ has an [sic] big
advantage. If a 500 line C++ file has errors, you will find out about all
of them (up to a limit) in about 20 seconds. The Lisp compilation aborts
after the first error, so it can take many minutes to find them all."

this author is also hung up on the time spent in garbage collection,
without giving any kind of statistics for the largely hidden memory
allocation costs of C++. "A rather bigger graph took 109 seconds in C++
and 805 seconds in Lisp, because Lisp needed 16 garbage collections to
finish." the single most damaging aspect of garbage collection might
indeed be that systems report the time spent in garbage collection
separately.

a telling comparison is the following: "If you bind new Lisp locals where
logical, a function that is easily comprehensible in C++ looks too long in
Lisp." a fundamental difference between the two languages is precisely how
and when you create functions.

the author dismisses macros, or what I understand to be macros, anyway:
"Perhaps a Lisp advocate would counter that it isn't difficult to
customize the Lisp syntax to whatever one desires. My experience is that
this isn't a good idea, because tools like the debugger operate on a more
canonical representation so it is best to be familiar with your program in
"standard" Common Lisp. It is also to the benefit of other readers not to
use a strange syntax."

it may seem that the author has failed to do the simplest thing in
comparing languages and compilers: turn on optimization. "The compiler
flags for both versions were as I set them for normal development: -g for
C++ and the defaults for Lisp. The C++ version could be sped up using
optimization (-O), and the Lisp version could be sped up using
`fast-entry' (avoids checks for the correct number of arguments)."

the author uses only a small fraction of the Lisp primitives: "It took me
about two weeks to design and implement C++ things that were primitives in
Lisp; after that, the added repertoire of Lisp didn't make much of a
difference."

I conclude that we have a C/C++ pro and a Lisp amateur comparing apples
and oranges, and coming out with what he knows best. zero information
value. worse yet, those who refer to this paper must do so for political
reasons, or marketing, which amounts to the same thing, and they can
hardly be intellectually honest about it.
> dismisses macros, or what I understand to be macros, anyway: "Perhaps a
> Lisp advocate would counter that it isn't difficult to customize the Lisp
> syntax to whatever one desires. My experience is that this isn't a good
> idea, because tools like the debugger operate on a more canonical
> representation so it is best to be familiar with your program in "standard"
> Common Lisp. It is also to the benefit of other readers not to use a
> strange syntax." it may seem that the author has failed to do the simplest
There's a simple solution: document the macros and treat them as
additions to the language. Even C programmers do that! How else
can you use tools like lex, yacc, or embedded SQL? Someone had to
document these tools.
Forth, Lisp, and even Smalltalk programmers can look at the language
system they use for programming in a slightly different way, by seeing
that the system itself is a program written in the language it compiles.
(See PJ Brown's book Writing Interactive Language Compilers and
Interpreters for a definition of "compiler" or "interpreter".)
I often suspect that mistakes like these stem from unfamiliarity
with compiler theory.
--
CommUnity: ftp://ftp.demon.co.uk/pub/archives/community
Me: http://cyber.sfgate.com/examiner/people/surfer.html
> In article <BUFF.94De...@pravda.world>,
> Well, cf Erik Naggum's response to the above, Dick Gabriel in
> particular and the Lucid team in general must be taken as LisP
> experts, so that may to some extent weight against their experience as
> reported above. Nevertheless, if this statement is something Dick has
> published and provided some quantifiable support for, it would be
> extremely useful material. There is so *damn* *little* serious study
> of the comparative productivity of differing programming tools.
It seems like computer science is stuck with surveys here. Is there any
more scientific approach?
>
> No, I'm not expecting any study which "conclusively proves" that any
> particular language is "the best" for any but very special purposes
> (and indeed I'd take with a large pinch of salt any study which
> claimed to); but it's useful to hear from people who have a wide range
> of expertise in more than one language.
>
> My own view, for what it's worth, is that LisP is probably more like
> eight or ten times as productive as C, in terms of delivered user
> level functionality per programmer hour (I don't know enough C++ to
> comment). However I'm biased towards LisP; although I used BCPL before
> LisP, LisP is far and away my preferred language. My experience runs
> to about 80KLOC in C, probably five times that in various LisPs (but
> mainly Interlisp/LOOPS).
>
This is close to my experience. I have mentioned to my colleagues, who are
more or less experts in both Lisp and C++, that there feels like a 5 to 1
productivity ratio of Lisp to C++, and the usual response is that it is too
conservative an estimate. One part of the big ratio is attributed to the
lack of a good C++ library. I created a Lispy C++ library for myself which
includes lists, dynamic typing and other nice Lispisms, and this helped
considerably but the productivity ratio is still pretty high. I think an
even bigger part of the ratio is due to the lack of the ability of C++ to do
what is referred to as exploratory programming. If exploratory programming
is important to developing the application then this makes the ratio really
high. For example, in developing my Go program in Lisp the ability to do
exploratory programming feels like it shoots up the ratio to more like 100
to 1, seriously!
This leads me to believe that for a complex enough application, like a Go
program, it is better to develop it in Lisp and then recode it in C++ in the
last two weeks before delivery. Or better yet, compile efficiently (enough)
in Lisp when that becomes more feasible.
It is interesting to muse that with the powerful software automation of the
"software ICs" paradigm, such as NEXTSTEP, in the future perhaps the only
interesting programming will be consuming software ICs, producing software
ICs, and otherwise exploratory programming. Contrary to popular opinion,
perhaps Lisp (or Lisp like languages) will be one of the few surviving human
written programming languages of the future.
--
William P. Vrotney - vro...@netcom.com
Anyway, I found the claims rather surprising, so here is a challenge. The
following toy program loads a 256x256 8 bit grayscale image from a file,
inverts it, and saves it to another file. I consider myself to be an
intermediate-level C++ programmer. The program took me an hour (almost to
the minute) to design, code, debug and test (Ok, I didn't test it very much).
I look forward to seeing the previous posters' Lisp implementations of the
same. It should only take them 6 - 30 minutes to knock them together, less
if they are reasonably experienced in Lisp.
// Toy C++ program that reads 256x256 gray-scale image from a file,
// and writes out the "negative" of the image.

#include <stdio.h>
#include <assert.h>

#define MEM_ALLOC_ERROR  -1
#define FILE_OPEN_ERROR  -2
#define FILE_READ_ERROR  -3
#define FILE_WRITE_ERROR -4

const int N = 256;

class Image
{
public:
    Image();
    ~Image();
    int LoadImage(char* filename);
    int SaveImage(char* filename);
    void Invert();
private:
    int width;
    int height;
    unsigned char* data;
};

Image::Image()
{
    data = NULL;
}

Image::~Image()
{
    delete [] data;             // array new must be matched by array delete
}

int Image::LoadImage(char* filename)
{
    FILE* f;

    assert(data == NULL);
    if ((f = fopen(filename, "rb")) == NULL)
        return FILE_OPEN_ERROR;
    width = N;
    height = N;
    if ((data = new unsigned char[(size_t)width * height]) == NULL) {
        fclose(f);
        return MEM_ALLOC_ERROR;
    }
    if (fread(data, sizeof(unsigned char), (size_t)width * height, f)
            != (size_t)width * height) {
        delete [] data;
        data = NULL;            // avoid a second delete in the destructor
        fclose(f);
        return FILE_READ_ERROR;
    }
    fclose(f);
    return 0;
}

int Image::SaveImage(char* filename)
{
    FILE* f;

    assert(data);
    if ((f = fopen(filename, "wb")) == NULL)
        return FILE_OPEN_ERROR;
    if (fwrite(data, sizeof(unsigned char), (size_t)width * height, f)
            != (size_t)width * height) {
        fclose(f);
        return FILE_WRITE_ERROR;
    }
    fclose(f);
    return 0;
}

void Image::Invert()
{
    unsigned char* p;

    assert(data);
    for (p = data; p != data + width * height; p++)
        *p = 255 - *p;
}

int main(int argc, char* argv[])
{
    Image image;

    if (argc != 3) {
        fprintf(stderr, "Usage %s <infile> <outfile>\n", argv[0]);
        return -1;
    }
    if (image.LoadImage(argv[1]) != 0) {
        fprintf(stderr, "Error loading image from file %s\n", argv[1]);
        return -1;
    }
    image.Invert();
    if (image.SaveImage(argv[2]) != 0) {
        fprintf(stderr, "Error saving image to file %s\n", argv[2]);
        return -1;
    }
    return 0;
}
--
Nick Mein
MSc Student
Dept of Computer Science
University of Otago
Dunedin
New Zealand.
--
Nick Mein
MSc Student
Dept of Computer Science
University of Otago
Dunedin
New Zealand.
20 Minutes. I do not touch type. I tried to write a "multidimensional"
'read-sequence' (a function with that name will be in the upcoming CL
ANSI, the first ANSI OO Language standard), decided it was not worth
the effort, went back to the original version, corrected a few errors
(there are still a few around), checked CLtL2 a couple of times
(actually more) and looked up some documentation strings on my
faithful AKCL.
Pretty good, isn't it? :)
Given that I do not touch type (and I am *really* slow at typing), my
personal speedup is about 4x, give or take.
Moreover, the CL program does more than the C++ program and ILISP
under Emacs makes it very easy to modify one function at a time and
reload and recompile it in the Lisp system without recompiling the
whole file. I did not use this feature now, but it is definitely an
extra speedup for larger projects.
Happy Lisping
Marco Antoniotti
PS. For the record, I am fluent in C, C++ and Ada.
-------------------------------------------------------------------------------
;;; -*- Mode: CLtL -*-

(defconstant +N+ 256)

(defclass image ()
  ((data :accessor image-data
         :initform (make-array (* +N+ +N+)
                               :initial-element 0
                               :element-type '(unsigned-byte 8)))))

(defgeneric load-image (image filename))
(defgeneric save-image (image filename))

(defmethod load-image ((im image) (filename string))
  (with-open-file (in-stream filename
                             :direction :input
                             :element-type '(unsigned-byte 8)
                             :if-does-not-exist :error)
    (read-sequence (image-data im) in-stream (* +N+ +N+))))

(defmethod save-image ((im image) (filename string))
  (with-open-file (out-stream filename
                              :direction :output
                              :element-type '(unsigned-byte 8)
                              :if-does-not-exist :create
                              :if-exists :supersede)
    (write-sequence (image-data im) out-stream (* +N+ +N+))))

;;; 'fread' is a C library function that does 'buffered' input. The
;;; upcoming CL ANSI standard will have a counterpart called
;;; 'read-sequence'.
;;; I could have looked up the way AKCL or CMUCL interface with the C
;;; library, but I decided to just write a stupid version of it.

(defun read-sequence (array-data
                      stream
                      &optional
                      (dims (first (array-dimensions array-data))))
  (dotimes (i dims)
    (setf (aref array-data i) (read-byte stream t))))

;;; Same comments.

(defun write-sequence (array-data
                       stream
                       &optional
                       (dims (first (array-dimensions array-data))))
  (dotimes (i dims)
    (write-byte (aref array-data i) stream)))

(defgeneric invert (image))

(defmethod invert ((im image))
  (with-slots (data) im
    (dotimes (i (* +N+ +N+))
      (setf (aref data i) (- 255 (aref data i))))))

(defun test (infile outfile)
  (let ((im (make-instance 'image)))
    (load-image im infile)
    (invert im)
    (save-image im outfile)))
--
Marco Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu
...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertholdt Brecht
: Anyway, I found the claims rather surprising, so here is a challenge. The
: following toy program loads a 256x256 8 bit grayscale image from a file,
: inverts it, and saves it to another file. I consider myself to be an
: intermediate-level C++ programmer. The program took me an hour (almost to
: the minute) to design, code, debug and test (Ok, I didn't test it very much).
: I look forward to seeing the previous posters' Lisp implementations of the
: same. It should only take them 6 - 30 minutes to knock them together, less
: if they are reasonably experienced in Lisp.
I do most of my programming in C and C++, although I would rather be working
in Lisp. I'd never opened a file in Common Lisp before now, so it took
me a few minutes to thumb through my CLtL2 and find the right keywords for
with-open-file. Here's my version of the program (untested):
(defun invert-file (input-file output-file)
  (with-open-file (input-stream input-file
                                :direction :input
                                :element-type '(unsigned-byte 8))
    (with-open-file (output-stream output-file
                                   :direction :output
                                   :if-exists :supersede
                                   :element-type '(unsigned-byte 8))
      ;; read and write of the image header would go here
      ;; This is bad, I should go until the end of the file, not
      ;; count out how many bytes. It's not as flexible this way.
      (dotimes (i (* 256 256))
        (write-byte (- 255 (read-byte input-stream))
                    output-stream)))))
Well, that's it. I don't know if it's a fair comparison, the C++ version
had a lot of junk in it. If I were quickly prototyping in C:
(Note the emphasis on quickly!)
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *in = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    unsigned char buffer;

    while (fread(&buffer, 1, 1, in) == 1) {
        buffer = 255 - buffer;
        fwrite(&buffer, 1, 1, out);
    }
    return 0;
}
Of course, I wouldn't recommend either one of these as examples in serious
software design. For that, I'd recommend grabbing an existing package,
like ImageMagick for images, and simply extending that to suit your needs.
mike
>
> Richard Billington, Simon Brooke, William Vrotney, Jim McDonald and Christopher
> Vogt have written that programming in Lisp is 2 - 10 time more productive
> than programming in C (two of the posters professed to having little
> experience with C++). Surprisingly, no suggestion is made that this
> productivity advantage is restricted to any particular problem domain (perhaps
> Lisp is 10x more productive for an NLP system, but only 2x more productive
> if you are writing a Unix device driver?).
I don't think that the productivity ratio is related to particular problem
domains as much as it is to the complexity of the problem domains. And so I
would agree with the gist of your last sentence in the above paragraph, but
for different reasons (I wouldn't even guess 2x but more like 1x if that for
the Unix device driver).
Also I should add that I think that it is more than just the complexity of
the problem domain that increases the ratio. Uncertainty is another.
Uncertainty of the algorithm and data representation. This is where
exploratory programming methods are also very helpful. Although complexity
and uncertainty of the problem domain frequently go hand and hand they don't
have to. For example I helped develop an AI scheduling algorithm in Lisp
that took about six months to develop with a lot of intensive exploratory
programming. But the final algorithm turned out to be coded in only about 6
pages of code. Hence the final algorithm was not very complex but the
uncertainty of finding the correct abstractions was the hard part. Now that
I think back on this task, doing the same thing in C++ seems downright
silly. However I could easily recode the algorithm in C++ today in a few
hours.
>
> Anyway, I found the claims rather surprising, so here is a challenge. The
> following toy program loads a 256x256 8 bit grayscale image from a file,
> inverts it, and saves it to another file. I consider myself to be an
> intermediate-level C++ programmer. The program took me an hour (almost to
> the minute) to design, code, debug and test (Ok, I didn't test it very much).
> I look forward to seeing the previous posters' Lisp implementations of the
> same. It should only take them 6 - 30 minutes to knock them together, less
> if they are reasonably experienced in Lisp.
>
> - Negate image program deleted -
In my understanding of the productivity ratio this last paragraph does not
reflect any sort of important proof. Writing a small piece of a BITBLTer is
so well understood and so non-complex that I can not see any advantage to
simply coding this thing in Lisp versus coding it in C++. You only have one
struct, Image, with two slots and a couple of methods. This hardly
qualifies as a complex program. In fact being a die hard Lisper that I am I
would probably still write this thing in C since it looks like an OS
utility. We already have a very elegant language for writing OS utilities
called C. I wouldn't even waste the time doing this in C++ unless there was
a requirement to provide an object interface.
By the way, I for one lose, it would probably take me more than 30 minutes
from start to finish to write your program in Lisp, so I will not even
start. But, if you really want to demonstrate the effect of the
productivity ratio, let's pick a challenge that shows off Lisp and not one
that should obviously be done in C. Here is my challenge:
"You are given spectral sound files representing frequency, amplitude,
bandwidth and time of sounds picked up by hydrophones in the ocean. You are
looking for torpedo sounds but the hydrophones pick up the sounds of the
ship dragging the hydrophones and other miscellaneous noise in the ocean.
Develop an existing standard blackboard system shell and source model and
discover the blackboard level abstractions that allow the top level of the
blackboard to identify a torpedo source specified in the source model.
Hint1: Use the harmonic structure of sound sources for source modeling and
identification. Hint2: You need to consult with human experts on torpedo
sounds and do many experiments with that data to develop the rules."
This was one of my last tasks that I did in approx. 6 months in Lisp. How
long would it take you to do this in C++ given that you do not have our
results? I am very serious about this challenge to make a point but I hope
that you do not start it because I predict that you, assuming you get some
guidance of where to start, will fumble around for the first three months
only to discover that you need another 3 months to reinvent and implement
that portion of Lisp in C++ to give you the interactive visibility you need
to do the experiments. This prediction is based on the AI programs that I
developed in C++ instead of Lisp.
| Anyway, I found the claims rather surprising, so here is a challenge.
| The following toy program loads a 256x256 8 bit grayscale image from a
| file, inverts it, and saves it to another file. I consider myself to
| be an intermediate-level C++ programmer. The program took me an hour
| (almost to the minute) to design, code, debug and test (Ok, I didn't
| test it very much). I look forward to seeing the previous posters'
| Lisp implementations of the same. It should only take them 6 - 30
| minutes to knock them together, less if they are reasonably experienced
| in Lisp.
hmmm.
(defun invert-image (image-path inverted-path)
  (with-open-file (image image-path
                         :element-type '(unsigned-byte 8)
                         :direction :input)
    (with-open-file (inverted inverted-path
                              :element-type '(unsigned-byte 8)
                              :direction :output
                              :if-exists :supersede)
      (dotimes (i (* 256 256))
        (write-byte (- 255 (read-byte image)) inverted)))))
4 minutes 50 seconds, including testing on a file made by this function
(defun make-image (image-path)
  (with-open-file (image image-path
                         :element-type '(unsigned-byte 8)
                         :direction :output
                         :if-exists :supersede)
    (dotimes (i 256)
      (dotimes (j 256)
        (write-byte j image)))))
add 2 minutes to read and understand your code.
I am not proficient with file I/O in Common LISP, so had to look up
`with-open-file' to get the arguments right.
the functions are also moderately fast.
* (time (make-image "/tmp/test"))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

Evaluation took:
  0.27 seconds of real time
  0.26 seconds of user run time
  0.01 seconds of system run time
  0 page faults and
  1344 bytes consed.
NIL
* (time (invert-image "/tmp/test" "/tmp/test.out"))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

Evaluation took:
  0.56 seconds of real time
  0.54 seconds of user run time
  0.02 seconds of system run time
  0 page faults and
  3944 bytes consed.
NIL
CMU CL 17f on a 50 MHz SPARCstation 2. /tmp is in memory.
#<Erik>
--
requiescat in pace: Erik Jarve (1944-1994)
But which language scales better to *large* problems? Maybe Lisp gurus will
disagree with me, but I believe that C++ is better suited to very large
applications.
In addition to initial implementation, what about maintainability? Which
language is more reliably modified when the requirements evolve?
Opinions?
--
--------------------------------------------------------------------------
All opinions are MINE MINE MINE, and not necessarily anyone else's.
d...@phlan.sw.stratus.com | "Laddie, you'll be needing something to wash
(Dave Toland) | that doon with."
|> Anyway, I found the claims rather surprising, so here is a challenge. The
|> following toy program loads a 256x256 8 bit grayscale image from a file,
|> inverts it, and saves it to another file. I consider myself to be an
|> intermediate-level C++ programmer. The program took me an hour (almost to
|> the minute) to design, code, debug and test (Ok, I didn't test it very much).
|> I look forward to seeing the previous posters' Lisp implementations of the
|> same. It should only take them 6 - 30 minutes to knock them together, less
|> if they are reasonably experienced in Lisp.
|>
|>
|> // Toy C++ program that reads 256x256 gray-scale image from a file,
|> // and writes out the "negative" of the image.
If I were to write a C++ version, I'd write this:
#include <stdio.h>
#include <stdlib.h>     /* for exit() */

int main(int argc, char *argv[])
{
    int x, y;
    FILE *in, *out;

    if (argc != 3) {
        fprintf(stderr, "Usage %s <infile> <outfile>\n", argv[0]);
        exit(-1);
    }
    in = fopen(argv[1], "rb");
    if (in == NULL) {
        fprintf(stderr, "Cannot open \"%s\" for reading.\n", argv[1]);
        exit(-1);
    }
    out = fopen(argv[2], "wb");
    if (out == NULL) {
        fprintf(stderr, "Cannot open \"%s\" for writing.\n", argv[2]);
        exit(-1);
    }
    /* getc's EOF return is deliberately unchecked; error handling is
       as minimal as in the original challenge program */
    for (y = 0; y < 256; y++)
        for (x = 0; x < 256; x++)
            putc(255 - getc(in), out);
    fclose(in);
    fclose(out);
    return 0;
}
Took all of three minutes to type in and compile. Nick's program
illustrates one of the things about the current batch of C++ programmers
that bugs the daylights out of me: there seems to be this feeling that if
you're using an "object oriented" language, EVERYTHING has to be an
object. What a total bunch of *&%^$!
The other thing Nick's program illustrates is the tendency to over-spec
and over-build. If all you need is a double for loop, just write a double
for loop!
Mike McDonald m...@titan.ess.harris.com
Here is the quote I've seen. It is definitely worth reading the entire
article, and the entire "Critic at Large" series:
"My experience with C++ development started with a group just learning
C++ and ended with a fairly expert group. At the start I estimated the
productivity advantage of Lisp/CLOS over C/C++ at a factor of 3 to 4; at
the end I set it at 30%." [p. 91]
R.P. Gabriel, "Productivity: Is There a Silver Bullet?", Journal of
Object-Oriented Programming, March-April 1994, pp. 89-92.
if one used lisp). I agree with LGM: this proves nothing, but is simply
some hearsay which is contrary to Mr. Trickey's hearsay.
--
Ken Anderson
Internet: kand...@bbn.com
BBN ST Work Phone: 617-873-3160
10 Moulton St. Home Phone: 617-643-0157
Mail Stop 6/4a FAX: 617-873-2794
Cambridge MA 02138
USA
> But which language scales better to *large* problems? Maybe Lisp gurus
> will disagree with me, but I believe that C++ is better suited to very
> large applications.
I believe that history is against you here. I believe that there is
now general agreement that the major problem with the Lisp Machine
architecture being marketed to the general computer world was that the
Lisp Machine was marketed as the ONLY machine that could handle the
very largest and most complex problems. This was borne out by the
many examples of Symbolics or Explorers being used for the massive
financial analysis expert systems, telephone-system routing, military
command and control modeling...
The problem was that there were still a lot of "simple" applications
that were being done in languages like C and C++, but the developers
didn't want to have to switch from the Sun/PC/Mac being used for the
"simple" problems to the Lisp Machine for the really hard stuff.
The C systems were better able to handle "Hello World".
These simple problems were often the entry-level prototypes for the
truly massive problems, but by the time the power of the Lisp Machine
was actually needed, the project was so entrenched in C (or whatever)
that the switch to the power tool was not perceived as feasible by
management. These projects might then fail due to the poor choice of
language with no acknowledgement that a different language might
actually have helped.
If the project was in C and failed, the conclusion was "It was too
large. No one could have done it."
If the project was in Lisp and failed, the conclusion was "Lisp is
just not a suitable language."
The number of truly massive projects accomplished on the Lisp
Machines is some evidence that these conclusions are both false. It
would be interesting to get some sort of survey of "very large
applications" to see if a pattern would emerge favoring some language
or design technology.
Skip Egdorf
h...@lanl.gov
It is interesting to watch features from Lisp creep into the C community.
A decade ago I was saying that the mainstream language after 2000 is
likely to be an evolutionary descendant of C, but will actually be much
closer to lisp implementations of the 80's than to C implementations of
that era. I think the evolution of C++ has tended to support that
contention. (My regret is that there will be a lot of wasted motion in
the process, witness the C++ world reinventing garbage collection.)
Even Bjarne's defense of C++ expressed pleasure at the speed with which
some lispish features had made it into C++, and many of the implementers
of those features were in fact Lisp programmers trying to carry their
vision of a programming environment into C++.
Lisp has the great advantage of being based on a firm mathematical
foundation. In the long run, the random walks taken by the C community
are likely to arrive at that same maximum in the space of programming
languages.
I remember reading a survey years ago when I was at Symbolics, but I
can't find it now.
I think that anything scientific is problematic at best, because you
really need a *big* project, and a large number of people, and how
can that be arranged?
How about somebody sponsoring a programming contest? A team of people
spend a weekend working on a problem, or a set of problems, free to
select what language and what computer they use.
I used to think that Lisp was 10x C (and I have no experience with C++)
but I now believe there is a range between 2x and 5x.
--
Christopher J. Vogt vo...@netcom.com
From: El Eh, CA
> Anyway, I found the claims [that lisp is 2-10x more productive than C]
> rather surprising, so here is a challenge. The following toy program
> loads a 256x256 8 bit grayscale image from a file, inverts it, and saves
> it to another file. I consider myself to be an intermediate-level C++
> programmer. The program took me an hour (almost to the minute) to design,
> code, debug and test (Ok, I didn't test it very much). I look forward to
> seeing the previous posters' Lisp implementations of the same. It should
> only take them 6 - 30 minutes to knock them together, less if they are
> reasonably experienced in Lisp.
I am reasonably experienced in lisp. The following program took me 2
minutes and 49 seconds to design, code, debug and test. (I tested it about
as much as you did -- it runs fine on a single example). Most of that time
was spent looking up read/write-byte in the manual because I wasn't sure it
did exactly what I wanted -- it does.
I guess LISP is more like 20x more productive. And much easier to read,
understand and debug.
;; This reads a file of (height * width) 8 bit values, inverts them, and
;; writes them to another file. It will signal errors in pretty much the same
;; situations that the C program will. (The streams are opened with a byte
;; element type, which read-byte and write-byte require.)
(defun invert (infile outfile &key (height 256) (width 256))
  (with-open-file (in infile :element-type '(unsigned-byte 8))
    (with-open-file (out outfile
                         :direction :output
                         :element-type '(unsigned-byte 8)
                         :if-exists :supersede)
      (dotimes (x height)
        (dotimes (y width)
          (write-byte (- 255 (read-byte in)) out))))))
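A hypothetical call, for concreteness (the file names are made up):

(invert "/tmp/test" "/tmp/test.out")                      ; 256x256 default
(invert "/tmp/big" "/tmp/big.out" :height 512 :width 512) ; other sizes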
--
Lawrence Hunter, PhD.
National Library of Medicine
Bldg. 38A, 9th floor
Bethesda. MD 20894 USA
tel: +1 (301) 496-9300
fax: +1 (301) 496-0673
internet: hun...@nlm.nih.gov
encryption: RIPEM via server; PGP via "finger hun...@work.nlm.nih.gov"