
Why garbage collection?


Richard Villanueva

Jan 13, 1996
I understand the classical Lisp method of automatic garbage collection,
and it is very elegant. It reserves only one bit per cons cell (the
mark bit). However, on large systems, the long pause for garbage
collection is bad, so people look for more sophisticated methods.

My question is, why not plain old reference counts? Couldn't you
reserve one byte per cons for the count, and if the count reaches 255,
the cell becomes immortal? I know that circular lists are a problem,
but the system could easily find all CDR-loops, say when all other
space was exhausted.

In addition to being simpler and having fewer long pauses, this method
is probably faster in the long run in spite of the overhead for
tallying reference counts.

So am I missing something? Or are there other methods that are even
better?
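
For concreteness, here is roughly the scheme being proposed, as a minimal
C++ sketch - one saturating count byte per cons, with 255 meaning
"immortal". The names are illustrative, not taken from any actual Lisp
runtime:

#include <cstdint>

struct Cons {
    Cons*   car;
    Cons*   cdr;
    uint8_t refs;   // saturates at 255; a saturated cell is never freed
};

void retain(Cons* c) {
    if (c && c->refs < 255) ++c->refs;   // 255 is sticky: cell is immortal
}

void release(Cons* c) {
    if (!c || c->refs == 255) return;    // immortal cells are ignored
    if (--c->refs == 0) {
        release(c->car);   // a real system would loop down the CDR chain
        release(c->cdr);   // instead, to avoid deep recursion on long lists
        delete c;
    }
}

Note that a cycle of cells never brings any count to zero, which is
exactly the CDR-loop problem conceded above.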

--

+===============================================================+
| Richard Villanueva | Art and science cannot exist but in |
| San Diego, Calif. | minutely organized particulars. |
| rv...@netcom.com | -- William Blake |
+===============================================================+

Timothy Larkin

Jan 13, 1996
> My question is, why not plain old reference counts?

From http://www.cs.rochester.edu/u/miller/ai-koans.html:

One day a student came to Moon and said, "I understand how to make a
better garbage collector. We must keep a reference count of the
pointers to each cons." Moon patiently told the student the
following story-

"One day a student came to Moon and said, "I understand how
to make a better garbage collector...

Erik Naggum

Jan 14, 1996
[Richard Villanueva]

| I understand the classical Lisp method of automatic garbage collection,
| and it is very elegant. It reserves only one bit per cons cell (the
| mark bit). However, on large systems, the long pause for garbage
| collection is bad, so people look for more sophisticated methods.

there is no "long pause" in modern systems. numerous brilliant minds have
worked on garbage collection for many years. that is nearly a guarantee
that you will need to have in-depth knowledge of the prior art in garbage
collection techniques to be able to provide useful suggestions.

| My question is, why not plain old reference counts?

One day a student came to Moon and said, "I understand how to make a better
garbage collector. We must keep a reference count of the pointers to each
cons." Moon patiently told the student the following story-

"One day a student came to Moon and said, "I understand how to
make a better garbage collector...

(from the AI koans collection, found, among other places, at
http://www.cs.rochester.edu/u/miller/ai-koans.html.)

| So am I missing something? Or are there other methods that are even
| better?

I have misplaced the address of the mother of all sites on garbage
collection, but there is one that has an excellent survey, and there are
many papers available. the literature was sufficiently, um, extensive,
that I figured that I had work to do and news to read before I would embark
on this long journey. if my plan to become immortal succeeds, I'll
certainly study garbage collection. in the meantime, I know only that
anything I could come up with on my own is bound to be discarded as too
inefficient already, unless I luck out and strike 18 aces in a row.

one of the most well-known inefficient and wasteful techniques is reference
counting. most C++ programmers invent their own particularly ugly breed of
inefficient reference counting. this is why C++ is so popular -- the
probability that each and every C++ programmer will actually be able to
improve something in that language and feel great about it is exactly 1.

btw, your "one byte" stuff won't work. modern computers are no longer byte
addressable, but instead waste two or three bits of the precious machine
word and address range because some inferior languages think that byte
addressability is a win. the smallest efficiently addressable unit on RISC
processors is usually 4 bytes, sometimes 8 bytes. even on CISCs, you may
pay a hefty penalty for misaligning your data.
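
The padding cost is easy to check. A small C++ sketch (the sizes in the
comments assume typical 32- and 64-bit ABIs; exact figures vary by
platform):

#include <cstdio>

struct Cons   { void* car; void* cdr; };                  // two words
struct RcCons { void* car; void* cdr; unsigned char n; }; // plus count byte

int main() {
    // On a typical 32-bit ABI this prints 8 and 12; on a 64-bit ABI,
    // 16 and 24: alignment padding turns the "one byte" into a whole word.
    std::printf("plain cons:   %zu bytes\n", sizeof(Cons));
    std::printf("counted cons: %zu bytes\n", sizeof(RcCons));
}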

#<Erik 3030584967>
--
the problem with this "information superhighway" is mainly that if you ask
people to go play in it, they don't even understand when they get run over.

Richard Villanueva

Jan 14, 1996
Erik Naggum (er...@naggum.no) wrote:

: there is no "long pause" in modern systems. numerous brilliant minds have
: worked on garbage collection for many years. that is nearly a guarantee
: that you will need to have in-depth knowledge of the prior art in garbage
: collection techniques to be able to provide useful suggestions.

I realize how unlikely it is that I would break new ground on the subject.
Hence my puzzlement. A friend of mine who has worked in support of AI
researchers for many years told me that garbage collection was one of the
obstacles that was hindering the acceptance of Lisp. This got me wondering.
I take it that he must be ill-informed.

: I have misplaced the address of the mother of all sites on garbage
: collection, but there is one that has an excellent survey, and there are
: many papers available.

Someone else provided the site info. I'll FTP it.

Jeff Dalton

Jan 15, 1996
In article <rvillDL...@netcom.com> rv...@netcom.com (Richard Villanueva) writes:
>I understand the classical Lisp method of automatic garbage collection,
>and it is very elegant. It reserves only one bit per cons cell (the
>mark bit). However, on large systems, the long pause for garbage
>collection is bad, so people look for more sophisticated methods.

These days, GC does not usually involve a long pause and is
probably more efficient than reference counts.

-- jd

Erik Naggum

Jan 15, 1996
[Richard Villanueva]

| I realize how unlikely it is that I would break new ground on the
| subject. Hence my puzzlement. A friend of mine who has worked in
| support of AI researchers for many years told me that garbage
| collection was one of the obstacles that was hindering the acceptance
| of Lisp. This got me wondering. I take it that he must be
| ill-informed.

most people are ill-informed about the issues of memory management in
general and garbage collection in particular. as I hinted: it is a very
complicated topic. e.g., people think that manual memory management is
always faster than garbage collection, and on top of this think that C's
malloc and free are time- and space-efficient. _none_ of this is true.
it is no accident that none of the standard Unix programs use dynamic
memory unless absolutely necessary, and instead are rife with arbitrary
(small) limits. the arguments used against garbage collection today
were probably used against dynamic memory as a whole not too long ago.

it is, however, very true that the acceptance of Lisp was hindered by a
particular brand of ill-informed misunderstanding, namely prejudice towards
automating one particular complex manual task. for some bizarre reason,
programmers have been taught to automate complex manual tasks and have
consistently made them several orders of magnitude faster, but memory
management is somehow exempt from this rule. why? of course, it isn't,
but automatic storage reclamation was perceived as wasteful ("what? there
are _dead_ objects out there, wasting _expensive_ memory?"), time-consuming
(it is actually much faster to pause infrequently (perhaps never) to do
garbage collection than to slow down the program all through its lifetime
with part-time garbage collection, but most people care about small and
immediate things, not big issues), even unnecessary ("relax, I can handle
this. besides, I know better what my program needs than some newfangled
automatic algorithm ever could."). some programmers also feel an acute
lack of control, and will argue about everything but their actual reasons
for not automating their memory management, preferring to micromanage
individual bytes instead of managing memory.

so your AI research supporter (?) was not ill-informed, but what he told
you was not what you thought you heard. garbage collection hindered the
acceptance, but garbage collection was not at fault, the _perception_ of
garbage collection was the actual cause. lack of understanding of what it
was made for a scapegoat and an easy excuse. also, popularity and lack of
it are both self-propelling. (how often have you heard that somebody
doesn't want to learn Lisp because nobody uses it? that's why!)

| : I have misplaced the address of the mother of all sites on garbage
| : collection, but there is one that has an excellent survey, and there
| : are many papers available.
|
| Someone else provided the site info. I'll FTP it.

it is customary to supplement such vague information with harder
information if you get it. I would still like to know where it is.

#<Erik 3030654995>

Tim Bradshaw

Jan 15, 1996
* Erik Naggum wrote:

[Heavily edited]

> [Richard Villanueva]

> | My question is, why not plain old reference counts?

> one of the most well-known inefficient and wasteful techniques is reference
> counting.

Is this (inefficient) true? The Xerox Lisp machines had a
reference-counting GC which seemed to do OK. I remember it taking 10%
of the time or something, which seemed like a reasonable overhead.

--tim

Richard Villanueva

Jan 15, 1996
Erik Naggum (er...@naggum.no) wrote:

: so your AI research supporter (?) was not ill-informed, but what he told
: you was not what you thought you heard. garbage collection hindered the
: acceptance, but garbage collection was not at fault, the _perception_ of
: garbage collection was the actual cause.

Actually, my friend is a system and network administrator. I recently got
the free version of Allegro Common Lisp for Windows, and unless I am
horribly mistaken, I saw it pause for a painful length of time to do garbage
collection. If true, this either means that they are not using superior
algorithms that are well known, or garbage collection is an obstacle after
all. My comment about reference counts was not meant to say "listen to my
new idea", but only to say "even this old idea would be better than having
to endure such long pauses, so what's the deal?"

: | Someone else provided the site info. I'll FTP it.

: it is customary to supplement such vague information with harder
: information if you get it. I would still like to know where it is.

The garbage collection survey is at ftp.cs.utexas.edu, and the file is
/pub/garbage/gcsurvey.ps. It is a Postscript file, and I am forced to
confess that I do not know how to read or print it on my Windows machine.

David B. Lamkins

Jan 15, 1996
In article <rvillDL...@netcom.com>, rv...@netcom.com (Richard
Villanueva) wrote:


> Actually, my friend is a system and network administrator. I recently got
> the free version of Allegro Common Lisp for Windows, and unless I am
> horribly mistaken, I saw it pause for a painful length of time to do garbage
> collection. If true, this either means that they are not using superior
> algorithms that are well known, or garbage collection is an obstacle after
> all. [...]

ACL/Windows uses a generational garbage collector. However, the free
version limits total heap space to about 600K! In general, I don't think
GC works well if you push the limits of heap space. Also, ACL takes GC
parameters from an ALLEGRO.INI file -- they may simply be inappropriate
for such a small total heap space.

I use the full version of ACL/Windows, and rarely see a GC pause (the
cursor changes to indicate GC in progress) of more than a fraction of a second
on a Pentium 90.

Dave
--
CPU Cycles: Use them now or lose them forever...
http://www.teleport.com/~dlamkins/

Jim Veitch

Jan 15, 1996
Reference counting doesn't work because there is no way to deal with
circular structures that become garbage. I.e., a->b->a, etc.
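
To make the cycle problem concrete, a small self-contained C++
illustration; retain/release stand in for a plain reference-counting
scheme:

struct Node {
    Node*    next = nullptr;
    unsigned refs = 0;
};

void retain(Node* n)  { if (n) ++n->refs; }
void release(Node* n) {
    if (n && --n->refs == 0) { release(n->next); delete n; }
}

int main() {
    Node* a = new Node;  retain(a);   // the "root" holds a
    Node* b = new Node;
    a->next = b; retain(b);           // a -> b
    b->next = a; retain(a);           // b -> a: the cycle
    release(a);                       // root drops a: its count falls 2 -> 1
    // Neither count can ever reach zero now; both nodes leak forever.
}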

Now most modern GC's are generation scavenging so they collect only
recent garbage and ignore old garbage (the heuristic is that only recent
allocations become garbage). This normally works pretty well, reducing
GC times on modern machines to times like 1/10 of a second or less.
It can work poorly, but this is unusual.

: : so your AI research supporter (?) was not ill-informed, but what he told
: : you was not what you thought you heard. garbage collection hindered the
: : acceptance, but garbage collection was not at fault, the _perception_ of
: : garbage collection was the actual cause.
:
: Actually, my friend is a system and network administrator. I recently got
: the free version of Allegro Common Lisp for Windows, and unless I am
: horribly mistaken, I saw it pause for a painful length of time to do garbage
: collection. If true, this either means that they are not using superior
: algorithms that are well known, or garbage collection is an obstacle after
: all. My comment about reference counts was not meant to say "listen to my
: new idea", but only to say "even this old idea would be better than having
: to endure such long pauses, so what's the deal?"

Allegro CL for Windows uses modern GC methods. I would guess that
if you are seeing big delays you may not have enough memory and
you are seeing paging problems. 8MB of memory is not enough to
run the full development system (which is what you get in the free version)
very well. You really want 16 MB. If you are seeing problems with
16MB then you may have file I/O problems (e.g., network delays), or
least likely, but still possible, you really are seeing slow GC from
atypical usage of memory.

---------------------------------------------------------------
Jim Veitch Internet: j...@franz.com
Franz Inc., http://www.franz.com/
1995 University Avenue, Phone: (510) 548-3600
Berkeley, CA 94704. FAX: (510) 548-8253
ACL Unix FAQ: ftp.uu.net:/vendor/franz/faq
ACL Windows FAQ: ftp.uu.net:/vendor/franz/acl4w-faq
---------------------------------------------------------------

Jeff Dalton

Jan 15, 1996
In article <rvillDL...@netcom.com> rv...@netcom.com (Richard Villanueva) writes:
>Erik Naggum (er...@naggum.no) wrote:
>
>: so your AI research supporter (?) was not ill-informed, but what he told
>: you was not what you thought you heard. garbage collection hindered the
>: acceptance, but garbage collection was not at fault, the _perception_ of
>: garbage collection was the actual cause.
>
>Actually, my friend is a system and network administrator. I recently got
>the free version of Allegro Common Lisp for Windows, and unless I am
>horribly mistaken, I saw it pause for a painful length of time to do garbage
>collection.

How long was the painful length of time? How do you even know
it was for garbage collection (rather than, say, for paging)?
If it's because the system printed a message to say it was
garbage collecting, would you have noticed if the message had
not been printed? (Try turning the message off.)

How often does this occur?

How much of the total time was spent in these painful pauses?

I haven't used Allegro Common Lisp for Windows, so I don't know what
its properties are. But I regularly compile moderately large systems
in Lisp without being bothered by gc pauses. I find GC a problem only
when I'm pushing near the limits of what the machine can handle in any
case.

It's true that some programs will spend lots of time garbage
collecting, just as some will spend lots of time paging.


>If true, this either means that they are not using superior
>algorithms that are well known, or garbage collection is an obstacle after
>all.

An obstacle to what?

>My comment about reference counts was not meant to say "listen to my
>new idea", but only to say "even this old idea would be better than having
>to endure such long pauses, so what's the deal?"

How do you know reference counting gives you better performance?

Perhaps you think it's obvious, because reference counting distributes
the collection costs throughout the computation, rather than doing it
all at once in a pause. But many GC algorithms also distribute
the work.

-- jd

Richard Villanueva

Jan 16, 1996
I would just like to state that I reconstructed my previous
experiment in Allegro CL, and discovered that I made a mistake.
So I retract my statement that Allegro has slow garbage collection.

Jonas Kvist

Jan 17, 1996
Richard Villanueva wrote:
> Actually, my friend is a system and network administrator. I recently got
> the free version of Allegro Common Lisp for Windows, and unless I am
> horribly mistaken, I saw it pause for a painful length of time to do garbage
> collection.
[snip]
I wouldn't draw the conclusion that the GC was to blame for a long
pause when running under Windows. As far as I am concerned, that pause
might just as well be a "feature" from dear Microsoft. :)

Sorry if I'm a bit out of line,
/Jonas

--
------ <<<<<<< ((((((( OOOOOOOOOO ))))))) >>>>>>> ------
Jonas Kvist
Bjornkarrsgatan 13A:13 Phone: +46 (0)13 17 74 28
582 51 Linkoping E-mail: c93j...@und.ida.liu.se
Sweden URL: http://www-und.ida.liu.se/~c93jonkv/

Jeff Dalton

Jan 17, 1996
In article <rvillDL...@netcom.com> rv...@netcom.com (Richard Villanueva) writes:
>Erik Naggum (er...@naggum.no) wrote:
>
>: there is no "long pause" in modern systems. numerous brilliant minds have
>: worked on garbage collection for many years. that is nearly a guarantee
>: that you will need to have in-depth knowledge of the prior art in garbage
>: collection techniques to be able to provide useful suggestions.
>
>I realize how unlikely it is that I would break new ground on the subject.
>Hence my puzzlement. A friend of mine who has worked in support of AI
>researchers for many years told me that garbage collection was one of the
>obstacles that was hindering the acceptance of Lisp. This got me wondering.
>I take it that he must be ill-informed.

That (that he's ill-informed) does not follow.

GC _is_ an obstacle that hinders the acceptance of Lisp.

That doesn't mean it does so *for good reasons*.

(There may be some real-time applications where GC is still a
problem, though a Lisp program should be able to avoid generating
garbage if it's necessary to avoid it. But most of the time,
at least, modern garbage collectors are fast enough.)

-- jd


George J. Carrette

Jan 19, 1996
rv...@netcom.com (Richard Villanueva) wrote:
> [Allegro Common Lisp, Microsoft Windows...]

>If true, this either means that they are not using superior
>algorithms that are well known, or garbage collection is an obstacle after
>all.

Or, more likely, the garbage collection region-size factors
have been improperly set for the amount of physical memory
and swap space on your system.

It means nothing that a program pauses for tens of seconds at
a time on Microsoft Windows. Heck, I have a Windows NT system
with 16MB of ram that pauses regularly with Microsoft Word,
and even groans with multiple copies of a popular commercial
Terminal Emulator running at the same time.

Garbage collection, done early enough, and often enough,
makes programs use up less physical memory, not more!

-gjc


Henry Baker

Jan 22, 1996
In article <rvillDL...@netcom.com>, rv...@netcom.com (Richard
Villanueva) wrote:

> I understand the classical Lisp method of automatic garbage collection,
> and it is very elegant. It reserves only one bit per cons cell (the
> mark bit). However, on large systems, the long pause for garbage
> collection is bad, so

One of these days, I'm going to go through a number of programs which
I know do garbage collection, but which don't advertise that fact, and patch
them to print out a message "pausing for GC...".

Then, I'm going to go through the programs which _do_ advertise the fact
that they use GC, and change their messages to say "pausing for domain name
service" or "pausing to refill disk cache" or "pausing to admire the
view".

I'm also going to find a program which does reference-counting, and insert a
print statement into its count-decrementing code which says "pausing for GC...".

The lesson I'd like to teach people is that most people don't really have a clue
as to why the systems they use are slow or fast, and that if a system
advertises that it is pausing for X reason, then people will buy systems
that don't use X, or at least don't advertise the fact that they use X.

(I have a non-technical lawyer friend who does _only_ word processing, but
is constantly worried about not having enough horsepower in their Pentium
machine to handle the load, and complains that it is still slow. I've
explained over and over again that the slowness they see has nothing to
do with the CPU power because it depends upon disk speed, memory size and
modem speed, but these facts seem to make no difference. At least if
this person's car engine were overpowering its tires, you'd see a lot of
spinning wheels and tire-smoke... :-)

----

The other thing I'd like to do is to ask the ACM to embargo any textbooks
that continue to prattle on about how bad GC is. Unfortunately, these
textbooks lend credence to the old adage "If you can, do; if you can't,
write a bad textbook".

----

In the late 1970's and early 1980's, several (now) well-known people from a
certain large commercial laboratory in New Jersey decided to give numerous talks
designed to purposely mislead their audience about the costs of garbage
collection vs. its alternatives. I can only speculate why they did this, but it
was one of the most successful big lies since Germany in the 1930's.

This mendacity has conservatively cost the U.S. software industry billions of
dollars in lost productivity, and this lie is only recently being exposed with
the development of new web languages. Unfortunately, the wide distribution of
these people's books means that another generation of SW people has
probably already been poisoned.

Numerous tests show that replacing a reference counting system with even
a relatively simple mark-sweep GC can often improve overall memory management
performance by a factor of 2-5X. But don't expect to read about this in
many popular programming books -- their authors are too busy parroting the
inaccuracies of others.

--
www/ftp directory:
ftp://ftp.netcom.com/pub/hb/hbaker/home.html

Copyright (c) 1996 by Henry G. Baker. All rights reserved.
** Warning: Due to its censorship, CompuServe and its subscribers **
** are expressly prohibited from storing or copying this document **
** in any form. **

Cyber Surfer

Jan 23, 1996
In article <hbaker-2201...@10.0.2.15>
hba...@netcom.com "Henry Baker" writes:

> Numerous tests show that replacing a reference counting system with even
> a relatively simple mark-sweep GC can often improve overall memory management
> performance by a factor of 2-5X. But don't expect to read about this in
> many popular programming books -- their authors are too busy parroting the
> inaccuracies of others.

This is why I wrote a simple (and experimental) Lisp to C compiler
last year. It used a simple mark/compact GC, which I've been using
since the late 80s. The GC was originally written (by me) for use in
a Lisp interpreter, but the overhead from the interpreter didn't make
it easy to evaluate the GC/allocator.

The C code generated by my Lisp compiler wasn't the most efficient code
you'll ever see - far from it - but the performance was still pretty
good. I tested it at the extremes, one "benchmark" to generate a lot of
garbage, by endlessly copying a tree which grew larger with each copy,
and the other test just traversed the tree, once it had been built. I
was impressed by how fast _both_ tests were, and it looked to me like
the 2nd test wouldn't have been significantly faster if it had been
coded in C by hand.

My next plan is to replace the "Scheme" subset the compiler understood
with a pure functional Lisp, and see how many useful apps I can write
with it. I'd love to see someone's face, after they ask me which language
I used to write an app in, when I tell 'em it was Lisp.

However, the real reason for developing such a compiler is simply to
avoid spending a lot of time coding in C. I like C, but it takes so
much time to code in it! ;-)
--
<URL:http://www.demon.co.uk/community/index.html>
<URL:http://www.enrapture.com/cybes/namaste.html>
Po-Yeh-Pao-Lo-Mi | "You can never browse enough."

Bruno Haible

Jan 24, 1996
Jim Veitch <j...@Franz.COM> wrote:

> Reference counting doesn't work because there is no way to deal with
> circular structures that become garbage. I.e., a->b->a, etc.

This is not true.

Here is an algorithm which performs garbage collection of circular
structures in a reference counting system.

The reference count of an object in the heap is the number of pointers
out there pointing to the object. Each time a pointer is copied,
the reference count is incremented. Each time a pointer is overwritten,
the reference count is decremented.

The references which are stored outside the heap (on the stack etc.)
are called the roots. At the beginning of a GC, all objects which
are referenced by roots have to be marked. Instead of scanning the
roots (which may be highly unportable in a C or C++ environment),
we count the references from within the heap to a specific object,
and call this the "heaprefcount" of the object. If the "heaprefcount"
is less than the usual "refcount", there are roots pointing to the
object, and we consider the object "marked". If the "heaprefcount"
and the "refcount" are equal, we consider the object "unmarked".
Now the algorithm proceeds by marking objects which are pointed to
by already marked objects. As usual, objects which are not marked
at the end of this process are garbage and can be recycled.

In detail, every object has the fields "refcount" and "heaprefcount".
The GC algorithm proceeds in four steps:

Step 1. Walk through the heap and set all heaprefcounts to 0.

Step 2. Walk through the heap, and for every pointer you encounter,
increment the heaprefcount of the object pointed to.

Step 3. During this step, every object can be in one of three states:
(r) refcount > heaprefcount >= 0: object is referenced by a root.
(u) refcount = heaprefcount: object is (yet) unmarked.
(m) heaprefcount = -1: object is marked.
Walk through the heap, and for every object in state (r), mark it.
Marking an object means (recursively): Do nothing if it is already
in state (m). Else put it into state (m) and mark all objects it points
to.

Step 4. Walk through the heap, freeing all objects in state (u).
This may reduce the refcount of other objects, of course, but not
down to 0.


As you have seen, this algorithm doesn't need any knowledge about the
"root"s. It is therefore ideal for the implementation of embedded
[bytecode] interpreters, e.g. Lisp as an extension language.
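
A compact C++ rendering of these four steps, for a toy heap of fixed
two-slot objects. The field and function names are mine, and the
mutator's ordinary retain/release bookkeeping, which maintains refcount,
is assumed rather than shown:

#include <vector>

struct Obj {
    Obj* slots[2] = {nullptr, nullptr}; // this object's outgoing heap pointers
    int  refcount = 0;                  // maintained by the mutator, as usual
    int  heaprefcount = 0;              // scratch field used only during GC
};

std::vector<Obj*> heap;                 // every allocated object, live or dead

void mark(Obj* o) {                     // state (m) is heaprefcount == -1
    if (!o || o->heaprefcount == -1) return;
    o->heaprefcount = -1;
    for (Obj* s : o->slots) mark(s);    // recursion kept simple for the sketch
}

void gc() {
    for (Obj* o : heap) o->heaprefcount = 0;        // step 1
    for (Obj* o : heap)                             // step 2: count heap refs
        for (Obj* s : o->slots)
            if (s) ++s->heaprefcount;
    for (Obj* o : heap)                             // step 3: mark from state
        if (o->refcount > o->heaprefcount)          // (r), i.e. objects that
            mark(o);                                // roots point to
    std::vector<Obj*> live, dead;                   // step 4: (u) is garbage
    for (Obj* o : heap)
        (o->heaprefcount == -1 ? live : dead).push_back(o);
    for (Obj* o : dead)                             // dropping garbage's
        for (Obj* s : o->slots)                     // pointers lowers other
            if (s) --s->refcount;                   // refcounts, never to zero
    for (Obj* o : dead) delete o;
    heap.swap(live);
}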


Bruno Haible

----------------------------------------------------------------------------
Bruno Haible net: <hai...@ilog.fr>
ILOG S.A. tel: +33 1 4908 3585
9, rue de Verdun - BP 85 fax: +33 1 4908 3510
94253 Gentilly Cedex url: http://www.ilog.fr/
France url: http://www.ilog.com/

Kelly Murray

Jan 26, 1996 (to j...@franz.com)
> hai...@ilog.fr (Bruno Haible) wrote:

> >Jim Veitch <j...@Franz.COM> wrote:
> > Reference counting doesn't work because there is no way to deal with
> > circular structures that become garbage. I.e., a->b->a, etc.

> This is not true.

Yes it is, unless you consider the "way to deal" with it is to
add another algorithm besides reference counting...

> Here is an algorithm which performs garbage collection of circular
> structures in a reference counting system....


> The references which are stored outside the heap (on the stack etc.)
> are called the roots. At the beginning of a GC, all objects which
> are referenced by roots have to be marked. Instead of scanning the
> roots (which may be highly unportable in a C or C++ environment),
> we count the references from within the heap to a specific object,

This is not reference counting --- you're adding a mark-n-sweep
phase to the GC, where the mark is another reference count.

-Kelly Murray k...@franz.com http://www.franz.com

Bruno Haible

Jan 26, 1996
Prof. Henry Baker <hba...@netcom.com> wrote:
>
> The other thing I'd like to do is to ask the ACM to embargo any textbooks
> that continue to prattle on about how bad GC is. Unfortunately, these
> textbooks lend credence to the old adage "If you can, do; if you can't,
> write a bad textbook".

Please start with the textbook "C++ for C programmers", by Ira Pohl,
2nd edition. On page 268, he writes:

" Other complexity issues are fundamental to the C++ language design,
such as the lack of garbage collection (GC). Several proposals exist
[4][5][6], and their implementations support the contention that they
can be done without degrading performance in most applications. Most
other major OOP languages, such as Smalltalk, CLOS, and Eiffel, support
GC. The argument for GC is that it makes the programmer's task distinctly
easier. Memory leaks and pointer errors are common when each class
provides for its own storage management. These are very hard errors to
find and debug. GC is a well-understood technology, so why not?

" The argument against GC is that it extracts a hidden cost from all
users when employed universally. Also, GC manages memory but not other
resources. This would require destructors for _finalization_.
Finalization is the return of resources and other behavior when an
object's lifetime is over. For example, the object might be a file,
and finalization might require closing the file. Finally, it is not
in the tradition of the C community to have free store managed
automatically."

[4] Hans-J. Boehm and Mark Weiser. "Garbage Collection in an Uncooperative
Environment." Software - Practice and Experience, Sept. 1988,
pp. 807-820.

[5] Daniel Edelson and Ira Pohl. "A Copying Collector for C++." In
Usenix C++ Conf. Proc. 1991, pp. 85-102.

[6] Daniel Edelson. "A Mark and Sweep Collector for C++." In Proc. Princ.
Prog. Lang., January 1992.


Just look at the technical strength of the argument that GC is not
"in the tradition of the C community"...

Bruno Haible


Cyber Surfer

Jan 26, 1996
In article <4eae5s$6...@nz12.rz.uni-karlsruhe.de>
hai...@ma2s2.mathematik.uni-karlsruhe.de "Bruno Haible" writes:

> Just look at the technical strength of the argument that GC is not
> "in the tradition of the C community"...

Yeah, I love it. ;-)

BTW, I've been asked to review a GC for C++, so I guess now is a
good time to grab a few authoritive documents on the subject, like
the docs at cs.utexas.edu:/pub/garbage/ or the files in Henry
Baker's ftp space. I'll also have to find a Ghostscript viewer
(preferably for NT), as I don't currently have a Postscript viewer,
unless you count a laser printer. In this case, I don't.

Mind you, I'm very happy with a mark/compact GC, and I found one
in a computer science book, Fundamentals of Data Structures, by
E Horowitz and S Sahni. While they're not anti-GC, they refer to
Knuth and his belief that specialist languages such as Lisp and
SNOBOL are not necessary, and that list and string processing can
be done in any language. The languages that seem to have interested
them tend to be PL/I, Pascal, and Fortran. Not at all like Lisp.

The mark/compact algorithms they give are obviously not the best
available today, but they are at least simple enough to implement
easily. For programmers uncomfortable with relocating possibly
every pointer in a heap any time the GC runs, this could be
important. I was surprised to find that when I coded a GC based on
their algorithms in C, they worked the first time. I've been using
that code for the last 8 years without trouble, and with what I
find to be acceptable performance.

It was published in 1976, so it's one of a number of books from
around that time which I love. Sadly, many of the techniques in this
particular book seem to have been forgotten. Is anyone still using
coral rings, multilists and inverted multilists? Have these data
structures become "obsolete", like reference counting, cylinder-surface
indexing, etc?

Anyway, we can be sure that garbage collectors will be around for
a while yet, as a fair number of popular (that's probably a relative
issue - um, relative to VB or Perl? (-; ) languages using a GC of
some kind are still kicking around. I think that Henry Baker made
a very good point about how people perceive delays in software.

I'm currently seeing very long delays when I try to access most
Internet sites in the US. :-( Still, that'll improve eventually.
Then I may be able to grab some useful files, like some GC docs. ;-)
Meanwhile, I should read the Java docs I have here, as I'll be
using that soon. Doesn't Java use a GC...? I think it does!

Po-Yeh-Pao-Lo-Mi | "You can never GC enough."

Tim Hollebeek

Jan 26, 1996
Cyber Surfer (cyber_...@wildcard.demon.co.uk) wrote:

: It was published in 1976, so it's one of a number of books from
: around that time which I love. Sadly, many of the techniques in this
: particular book seem to have been forgotten. Is anyone still using
: coral rings, multilists and inverted multilists? Have these data
: structures become "obsolete", like reference counting, cylinder-surface
: indexing, etc?

Reference counting is obsolete? I better go rush over and tell that
to all the C++ programmers who use it extensively because they don't
have GC in the language :-) Heck, Byte even published an article on
how to write a refcounting pointer class in the last few months. Not
that Byte is exactly at the Leading Edge of programming, but it does
show that plenty of Real World programmers still do things that way.

Whether they _should_ is another story :-)
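
For readers who have not seen one, the kind of intrusive refcounting
pointer class such articles describe looks roughly like this; a hedged
sketch under invented names, not the Byte article's actual code:

template <class T>           // T must expose a public 'long refs' field
class Ref {
    T* p;
    void retain()  { if (p) ++p->refs; }
    void release() { if (p && --p->refs == 0) delete p; }
public:
    explicit Ref(T* q = nullptr) : p(q) { retain(); }
    Ref(const Ref& r) : p(r.p) { retain(); }
    Ref& operator=(const Ref& r) {
        if (p != r.p) { release(); p = r.p; retain(); }
        return *this;
    }
    ~Ref() { release(); }
    T* operator->() const { return p; }
    T& operator*()  const { return *p; }
};

struct Thing { long refs = 0; /* ... payload ... */ };

// Usage: Ref<Thing> a(new Thing), b = a; copies share the count, and the
// last Ref to die deletes the Thing. Cycles, of course, still leak.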

--
Tim Hollebeek | Everything above is a true statement, for sufficiently
PChem Grad Student | false values of true.
Princeton Univ. | t...@wfn-shop.princeton.edu
-------------------| http://wfn-shop.princeton.edu/~tim

Richard Pitre

Jan 26, 1996
>
> This mendacity has conservatively cost the U.S. software industry billions of
> dollars in lost productivity, and this lie is only recently being exposed with
> the development of new web languages. Unfortunately, the wide distribution of
> these people's books means that another generation of SW people has
> probably already been poisoned.
>
(Warning: rambling sarcastic dementia)
But now, thank goodness, it is common knowledge that "AI" is a "BAD THING" and
somebody( probably that Mr. Godel feller) proved that it "COULDN'T BE DONE" or
something like that. Nobody understands what Mr. Godel proved but it's a
pretty damned sure thing that he did prove that it doesn't matter if we do
understand what anyone else proves because proving things is really no big deal
anyway. This is especially true for proving things about computers because
those scientist types know nothing about money or sex. Greater minds have
informed me that since AI is a BAD THING then LISP, by implication (This
"implication" makes no sense to me but they tell me that you have to use the
"new logic" to understand it. In any case, I've lost my mind and can't do that
thinking thing too well anymore. I'm so bad off that the computer I own has
almost no relationship to my libido anymore. The friends that I used to have
when I could think feel sorry for me. I have an old elisp flame program that
wrote this post.), is a BAD THING too.

One of the greater revelations which has come from today's computer/market
science is that this software stuff and these microprocessors are just so
darn complicated that we have to live with bugs. Oh well, I guess that's what
insurance is for. Now we just have to get on with the tough work of making
money from flawed products that people will pay for even with that
understanding. Better yet, now that we understand that those old timey
languages are BAD we can kill whole forests to publish books on state of the
art languages like PERL (Proselytization Egregiously Relentlessly and
Licentiously). (I bought one of those 4 inch thick books on PERL and I couldn't
understand what it had to do with my computer so now I use the book to hunt
with. At 20 meters per second any animal will perish if hit.) We can turn an
XRAY machine into a human toaster by losing our C++ pointers. And then there is
Visual Basic(I faced the northwest and kissed the floor after typing out the
sacred name of The Ideal Programming Language by the company which has the one
true vision for the future of computing technology for AMERICA (land of the
rugged individual type)).

When the latest state of the art "operating system" loses files or locks up on
me I just figure that I messed up. After all they did fix
that once-in-a-millennium microcode bug in my processor and my OS manufacturer
had all those master beta testers hammering away on it before they sold it to
me so that anything that goes wrong has to be my own fault.

I love this latest program that I bought. It's really pretty with animated icons
and music and sometimes it helps me find my files. It tracks my investments in
bridges. Soon as I get back from my XRAY appointment I'm going to buy another
one.

richard

Vlastimil Adamovsky

Jan 27, 1996
hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) wrote:

>Just look at the technical strength of the argument that GC is not
>"in the tradition of the C community"...

The technical strength of the argument is that C++ is still a low level
language with a high level language advantage, and as such it
should do all the low level operations itself and, if needed, it
should implement the high level features by using its own low level
implementations.


*******************************************
* Vlastimil Adamovsky *
* Smalltalk, C++ and Envelop development *
*******************************************


Omar Othman

Jan 28, 1996
Hi. C++ers.

This question keeps on bothering me. I know it can be done. I just need
an example to get the concept across:

If I want to hold a lot of objects with different attributes in one
array or tree, so that I can walk through all of them one by one, how can
I do it? If your answer is too long, please point me to a book with some
examples.

thanks a lot.

You can post or send an e-mail to JY...@simsci.com

Jerry
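
The usual C++ answer to this question is a common base class with
virtual functions, plus one container of base-class pointers. A minimal
sketch with made-up names:

#include <cstdio>
#include <vector>

struct Shape {                          // common base for unlike objects
    virtual void describe() const = 0;
    virtual ~Shape() {}
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    void describe() const override { std::printf("circle, r=%g\n", r); }
};

struct Box : Shape {
    double w, h;
    Box(double w, double h) : w(w), h(h) {}
    void describe() const override { std::printf("box, %g x %g\n", w, h); }
};

int main() {
    std::vector<Shape*> all = { new Circle(1.0), new Box(2.0, 3.0) };
    for (Shape* s : all) s->describe(); // walk them all, one by one
    for (Shape* s : all) delete s;      // virtual dtor makes this safe
}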

Cyber Surfer

Jan 28, 1996
In article <4eb75f$l...@cnn.Princeton.EDU> tim@franck "Tim Hollebeek" writes:

> Cyber Surfer (cyber_...@wildcard.demon.co.uk) wrote:
>
> : It was published in 1976, so it's one of a number of books from
> : around that time which I love. Sadly, many of the techniques in this
> : particular book seem to have been forgotten. Is anyone still using
> : coral rings, multilists and inverted multilists? Have these data
> : structures become "obsolete", like reference counting, cylinder-surface
> : indexing, etc?
>
> Reference counting is obsolete? I better go rush over and tell that
> to all the C++ programmers who use it extensively because they don't
> have GC in the language :-) Heck, Byte even published an article on
> how to write a refcounting pointer class in the last few months. Not
> that Byte is exactly at the Leading Edge of programming, but it does
> show that plenty of Real World programmers still do things that way.

I stopped reading Byte a few years ago, easily 5 years after they
lost interest in programmers. ;-) If you want to know about basic
garbage collecting, I'd recommend a computer science book, as it'll
probably be more up to date. Beware of many programming books, as
I've lost count of the number of tutorials that use examples without
much (if any) error checking. _That's_ a very bad way to code!

> Whether they _should_ is another story :-)

Do Real World programmers know anything about computer science?
I dunno. "If it works, it must be ok" could be enough for some.
I'm still using code I wrote 10 years ago, so I need to take a
little bit more care...

Po-Yeh-Pao-Lo-Mi | "You can never browse enough."

Stefan Monnier

Jan 29, 1996
In article <4ecmfo$a...@news2.ios.com>,
Vlastimil Adamovsky <vl...@gramercy.ios.com> wrote:

] hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) wrote:
] >Just look at the technical strength of the argument that GC is not
] >"in the tradition of the C community"...
] The technical strength of the argument is that C++ is still a low level
] language with a high level language advantage, and as such it
] should do all the low level operations itself and, if needed, it
] should implement the high level features by using its own low level
] implementations.

You mean like the virtual table stuff that could be implemented with low-level
operations but instead is well hidden? At least well enough to make C++ objects
very inconvenient for representing any low-level data structure.

Little example: the vtable can be considered a class-pointer, but C++ has been
well enough designed to make it impossible for you to explicitly add any kind
of info to the vtable.

C++ is not low-level enough in this respect.
Furthermore, C++ is too low-level to make it reasonable to implement a copying
GC, thanks to all the nasty casts the programmer might feel like doing.

Basically, C++ is a mix between low and "high"-level language and I'm not
sure it's a good idea, since its high-level features are not really useable
when you want to use the low-level features and the low-level features make it
hard to take advantage of several aspects of the high-level features.


Stefan

Bruno Haible

Jan 29, 1996
Jim Veitch <j...@Franz.COM> wrote:
>> > Reference counting doesn't work because there is no way to deal with
>> > circular structures that become garbage. I.e., a->b->a, etc.

I replied:
>> This is not true.

Kelly Murray <k...@franz.com> says:
> Yes it is, unless you consider the "way to deal" with it is to
> add another algorithm besides reference counting...
>
> ...
>
> This is not reference counting --- you're adding a mark-n-sweep
> phase to the GC, where the mark is another reference count.

Viewing it this way, you are right. Of course the traditional, unmodified
reference counting scheme is not able to reclaim circular structures.
If I misunderstood Jim Veitch's point, I happily eat my words.

My point was just that reference counting and mark-n-sweep can be combined.


Bruno Haible email: <hai...@ilog.fr>
Software Engineer phone: +33-1-49083585

Marco Antoniotti

Jan 29, 1996
In article <4ei4og$l...@info.epfl.ch> "Stefan Monnier" <stefan....@lia.di.epfl.ch> writes:

This is probably off-track, but, as a diversion, please bear this last
gripe of mine on the language which is going to be swept away by
Java (no, it is not Lisp :) ).

One of the things that bothered me most with C++, was this sort of
"newspeak" which it introduced. For years people had been working in
Flavors, Clos, Smalltalk etc, and they pretty much shared a common
terminology. Then suddenly, we did not have "methods" any more, we
had "member functions", we lost the "inheritance" (pun intended) and
started "deriving classes".

Of course, the argument is that C++ wanted to "clarify" such things
and the choice of new terminology was a "good thing".

Well, I must say that I am very pleased to see that Java somewhat
reintroduced the "old" terminology and that Lisp, (as well as Dylan)
is not yet dead.

Half seriously yours
--
Marco Antoniotti - Resistente Umano
===============================================================================
International Computer Science Institute | mar...@icsi.berkeley.edu
1947 Center STR, Suite 600 | tel. +1 (510) 643 9153
Berkeley, CA, 94704-1198, USA | +1 (510) 642 4274 x149
===============================================================================
...it is simplicity that is difficult to make.
...e` la semplicita` che e` difficile a farsi.
Bertholdt Brecht

Richard Pitre

Jan 29, 1996
In article <4eae5s$6...@nz12.rz.uni-karlsruhe.de>
hai...@ma2s2.mathematik.uni-karlsruhe.de "Bruno Haible" writes:

Everyone understands that it's a "good thing" to reduce the effort required to
generate and maintain a program, and everyone appreciates the fact that bugs are
a "bad thing", especially when they cannot be overcome by statistical
arguments (BS), or legal, marketing and insurance departments. But, until
everyone appreciates the actual cost of software development and maintenance,
and until everyone has a close relative who has suffered grievous bodily harm
because of a software error, then originless, destination-free discussions, like
the one in this book, will continue. What worries me is that, at least in the
good ole US of A, there is precedent for a solution to the software quality
problem based on marketing, insurance and legal mechanisms. This type of
solution does not necessarily promote technological innovation in computer
products or a need for any more computer science.

If you prioritize speed over quality and reliability in the right way then you
can have all your code written in C or C++ on a small budget. In time, software
optimization technology will remove this problem. Otherwise only critical code
at the bottom layers of software systems can sometimes be written in C or C++.
Even so, for small enough amounts of code the advantages of C++ over assembler
are, in my incredible mind, questionable because you have to rely on one of
today's compilers and because C++ is complex. Today's compilers are written in a
psychological ambiance containing a special "warm fuzzy". This "warm fuzzy"
suggests that the inevitability of bugs implies that we don't have to worry
too much about generating them or developing serious fundamental technology to
avoid them. This insanity has progressed to the point that many people would
casually acknowledge that their CPU may have documented and undocumented bugs.
To the extent that software consumers are insensitive to differences in
quality, only junk will survive in the marketplace. Based on my recent
shopping extravaganzas there is a serious need for that kind of sensitivity
training in the software consuming public.

richard


Marco Antoniotti

Jan 30, 1996
In article <DLywC...@research.att.com> b...@research.att.com (Bjarne Stroustrup <9758-26353> 0112760) writes:

> From: b...@research.att.com (Bjarne Stroustrup <9758-26353> 0112760)
> Newsgroups: comp.lang.lisp,comp.lang.c++
> Date: Tue, 30 Jan 1996 00:07:29 GMT
> Organization: Info. Sci. Div., AT&T Bell Laboratories, Murray Hill, NJ
> Lines: 66
> Xref: agate comp.lang.lisp:20712 comp.lang.c++:172072

> mar...@lox.icsi.berkeley.edu (Marco Antoniotti) writes:
>
> > [discussion about C++ by someone else]
> >
> > This is probably off-track, but, as a diversion, please bear this last
> > gripe of mine on the language which is going to be swept away by
> > Java (no, it is not Lisp :) ).
>
> You are indeed way off track, and I think the announcements of C++'s
> imminent demise are rather premature.

Well, I think so too.

> > One of the things that bothered me most with C++, was this sort of
> > "newspeak" which it introduced. For years people had been working in
> > Flavors, Clos, Smalltalk etc, and they pretty much shared a common
> > terminology. Then suddenly, we did not have "methods" any more, we
> > had "member functions", we lost the "inheritance" (pun intended) and
> > started "deriving classes".
>
> I think you have your dates wrong. The C++ terminology was picked in
> 1979. Then, the work on CLOS hadn't yet started, Smalltalk-80 hadn't
> been completed, and its predecessor was not well known outside a small
> circle of researchers. I don't recall the dates for Flavors and Loops,
> but again these languages were not known outside the AI community for
> quite a few years.
>
> The C++ terminology is based on that of Simula (1967) and partly on that
> of C (1972). The base- and derived class terminology was indeed invented
> for C++ - based on rather negative experience teaching using the Simula
> super- and subclass terminology.
>
> A good source for dates and other historical facts about these languages is:
>
>     Preprint of Proc. ACM History of Programming Languages
>     Conference (HOPL-2).
>     April 1993.
>     ACM SIGPLAN Notices, March 1993.
>
> The C++ paper there is
>
>     Stroustrup: The History of C++: 1979-1991.
>
> A more thorough description of the design of C++ is:
>
>     Stroustrup: The Design and Evolution of C++.
>     Addison-Wesley. ISBN 0-201-54330-3.
>
> > Of course, the argument is that C++ wanted to "clarify" such things
> > and the choice of new terminology was a "good thing".
>
> You got the motivation wrong as well. There wasn't an accepted terminology
> to "clarify." I stuck to the most widely used terminology at the time
> (Simula's) as far as I could, and introduced new terms that fitted that
> and the terminology of C only where I saw no alternative.
>
> > Well, I must say that I am very pleased to see that Java somewhat
> > reintroduced the "old" terminology and that Lisp, (as well as Dylan)
> > is not yet dead.
> >
> > Half seriously yours
>
> It is a good idea to be at least half accurate even if only half serious.

Of course (this time seriously), I cannot contest Bjarne's account of
the history of C++ and the motivations that lead him to the choices
he made. To my partial justification I can only say that the earliest
document of Bjarne's on C++ that was widely available dates to 1983
("Adding Classes to C..." SW Practice and Experience" 13), and that I
doubt that many people actually saw the AT&T C++ preprocessor until
87/88 (I might, of course, be wrong on this.) Of course not many
people had a Xerox or a Symbolics to play around either.

What I find very interesting in Bjarne's post is the reference to
Simula. I never programmed in it and have only memories of the
chapters on Ghezzi's book (1st edition). It would seem that my
accusation of inventing a "newspeak" must then fall on the
Smalltalk/Loops/Flavors people. :)

Cheers

Bjarne Stroustrup <9758-26353> 0112760

Jan 30, 1996

mar...@lox.icsi.berkeley.edu (Marco Antoniotti) writes:

> [discussion about C++ by someone else]
>
> This is probably off-track, but, as a diversion, please bear this last
> gripe of mine on the language which is going to be swept away by
> Java (no, it is not Lisp :) ).

You are indeed way off track, and I think the announcements of C++'s
imminent demise are rather premature.

> One of the things that bothered me most with C++, was this sort of

- Bjarne

Jeff Dalton

Jan 30, 1996
In article <822675...@wildcard.demon.co.uk> cyber_...@wildcard.demon.co.uk writes:
>In article <4eae5s$6...@nz12.rz.uni-karlsruhe.de>
> hai...@ma2s2.mathematik.uni-karlsruhe.de "Bruno Haible" writes:
>
>> Just look at the technical strength of the argument that GC is not
>> "in the tradition of the C community"...
>
>Yeah, I love it. ;-)

But it _is_ true that GC is not in the tradition of the C community.
The argument that it's a "hidden cost" is key here. C programmers
feel that they know what everything will do in machine terms, and
to a fair extent they are right. (That's so despite a number of
difficulties and exceptions.)

So when an allocation might do lots of collecting as well (or
whatever), and you don't really know when, that seems to move
C into the higher-level / less-in-touch-with-the-machine camp.

>Mind you, I'm very happy with a mark/compact GC, and I found one
>in a computer science book, Fundamentals of Data Structures, by
>E Horowitz and S Sahni. While they're not anti-GC, they refer to
>Knuth and his belief that specialist languages such as Lisp and
>SNOBOL are not necessary, and that list and string processing can
>be done in any language. The languages that seem to have interested
>them tend to be PL/I, Pascal, and Fortran. Not at all like Lisp.

Well, surely it's true that list and string processing can be done
in (almost) any language. I've done list processing in Basic, for
instance. (Good Basics can, of course, do strings, so that's not
interesting.)

But there's a difference between saying a language is not necessary
and saying it's not valuable, or not worth having and using.
I'm not sure when Knuth stated this belief, but such points had
a different role in the past than they tend to do today,
because it was not so widely known that, or how, you could
do list or string processing.

A similar thing today (or maybe a few years back) might be to
point out that you could do object-oriented programming in
(almost) any language.

-- jd

Richard Pitre

Jan 30, 1996
In article <4ei4og$l...@info.epfl.ch> "Stefan Monnier"
<stefan....@lia.di.epfl.ch> writes:
> In article <4ecmfo$a...@news2.ios.com>,
> Vlastimil Adamovsky <vl...@gramercy.ios.com> wrote:
> ] hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) wrote:
> ] >Just look at the technical strength of the argument that GC is not
> ] >"in the tradition of the C community"...
> ] The technical strength of the argument is that C++ is still a low level
> ] language with a high level language advantage, and as such it
> ] should do all the low level operations itself and, if needed, it
> ] should implement the high level features by using its own low level
> ] implementations.
>
> You mean like the virtual table stuff that could be implemented with
> low-level operations but instead is well hidden? At least well enough
> to make C++ objects very inconvenient for representing any low-level
> data structure.
>
> Little example: the vtable can be considered a class-pointer, but C++
> has been well enough designed to make it impossible for you to
> explicitly add any kind of info to the vtable.
>
> C++ is not low-level enough in this respect.
> Furthermore, C++ is too low-level to make it reasonable to implement a
> copying GC, thanks to all the nasty casts the programmer might feel
> like doing.
>
> Basically, C++ is a mix between low and "high"-level language and I'm not
> sure it's a good idea, since its high-level features are not really usable
> when you want to use the low-level features and the low-level features
> make it hard to take advantage of several aspects of the high-level
> features.
>
C++ is complex and so is CLOS. OOP was supposed to give programmers a handle on
complexity but I'm not sure that the complexity overhead is either necessary or
viable. OOP may be a paradigm that is only almost right.

Cyber Surfer

Jan 31, 1996
In article <s08spgx...@lox.ICSI.Berkeley.EDU>
mar...@lox.icsi.berkeley.edu "Marco Antoniotti" writes:

> Of course (this time seriously), I cannot contest Bjarne's account of
> the history of C++ and the motivations that lead him to the choices
> he made. To my partial justification I can only say that the earliest
> document of Bjarne's on C++ that was widely available dates to 1983
> ("Adding Classes to C..." SW Practice and Experience" 13), and that I
> doubt that many people actually saw the AT&T C++ preprocessor until
> 87/88 (I might, of course, be wrong on this.) Of course not many
> people had a Xerox or a Symbolics to play around either.

Has this changed recently? I don't think so. Most people still think
I'm strange for wanting to use Lisp, and some find it hard to believe
that it can be used for a machine with 32 MB of RAM and running NT.
If that kind of machine _can't_ run Lisp, what can? Well, it's a
matter of perception, as it _can_ run Lisp, even a fairly large Lisp
like Allegro CL for Windows. However, very few people know this.

I doubt that many non-Lisp programmers/users have even heard of
Symbolics, never mind used one. The numbers will be insignificant
compared to the vast hordes of people who think that Lisp is dead.
C++ is very much alive and kicking, and a great many people know it.

If you made a reference to CFRONT above, then I think you may have
made a mistake. My understanding is that it is a _compiler_. It may
well produce C code, but there are Lisp to C compilers that do the
same thing, but they're not preprocessors either. My understanding
of Ratfor is that it was only "interested" in 10% of the code that ran
thru it, so I'll feel safe calling it a preprocessor. If it was
"interested" in 100% of the code, then I'd call it a compiler.



> What I find very interesting in Bjarne's post is the reference to
> Simula. I never programmed in it and have only memories of the
> chapters on Ghezzi's book (1st edition). It would seem that my
> accusation of inventing a "newspeak" must then fall on the
> Smalltalk/Loops/Flavors people. :)

Another case of "not invented here", I think. ;-) Perhaps we should
first define what we mean by the word "compiler", but we probably
all know what we mean by it...don't we? I dunno.

We could just be arguing over fine points that most programmers don't
have time for, never mind care about. They could be the people using
C++, coz it's there, and everybody knows it. Sometimes the only factor
that counts is the cost ("Lisp programmers know the value of everything,
and the cost of nothing", in which case, "C++ programmers know the
cost of everything, and the value of nothing"), and a typical C++
development system will cost about $300.

So, it's there and it's dirt cheap. No wonder so many people use it.

Vlastimil Adamovsky

unread,
Feb 1, 1996, 3:00:00 AM2/1/96
to
"Stefan Monnier" <stefan....@lia.di.epfl.ch> wrote:

Good Morning!!!!!

>You mean like the virtual table stuff that could be implemented with low-level
>operations but instead is well hidden ? At least well enough to make C++-objects
>very inconvenient to represent any low-level data-structure.

>Little example: the vtable can be considered a class-pointer, but C++ has been
>well enough designed to make it impossible for you to explicitly add any kind
>of info to the vtable.

Why would you need to modify the vtable? Create a pointer to your
table and modify it as you wish. By the way, can you modify a C
language compiler so it will behave as YOU want to?

>C++ is not low-level enough.....
>Furthermore, C++ is too low-level.....
Make your decision....

>Basically, C++ is a mix between low and "high"-level language and I'm not
>sure it's a good idea, since its high-level features are not really usable
>when you want to use the low-level features and the low-level features make it
>hard to take advantage of several aspects of the high-level features.

I am sure you have found some strange things. Can you document your
words?

Marco Antoniotti

unread,
Feb 1, 1996, 3:00:00 AM2/1/96
to

Ok. I made a mistake! I forgot the customary warning

":) :) FLAME BAIT AHEAD :) :)"

In my original posting.

But since we are at it, I want to complete the list of penances that
have been required from me. (Please assume the warning is repeated here).

1 - C/C++ is the dominant language right now (as COBOL and FORTRAN
were some time ago).

2 - Java is implemented in C and one of its design goals (cf. the
"White Paper") was to be "as familiar as possible to C/C++ programmers".
(a lesson that maybe the Dylan developers should have taken into account)

3 - Common Lisp is dead.

Having said so, I will slowly type my parentheses and 'defun' until I
am forced by the powers that be to chase pointer leaks around my
program. :)

In article <4eqh8l$c...@news2.ios.com> vl...@gramercy.ios.com (Vlastimil Adamovsky) writes:

From: vl...@gramercy.ios.com (Vlastimil Adamovsky)
Newsgroups: comp.lang.lisp,comp.lang.c++
Date: Thu, 01 Feb 1996 14:12:58 GMT
Organization: Internet Online Services
Lines: 47
NNTP-Posting-Host: ppp-32.ts-7.hck.idt.net
X-Newsreader: Forte Free Agent 1.0.82
Xref: agate comp.lang.lisp:20730 comp.lang.c++:172436

mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:

>This is probably off-track, but, as a diversion, please bear this last
>gripe of mine on the language which is going to be swapped away by
>Java (no, it is not Lisp :) ).

I wonder what language the Java language is implemented in? Why is Java a
"simplified" C++ (according to the creators)?
Because for some people the programming language has to be simple,
otherwise they are not able to work with it. But they are able to
write thick books about its death.

...

>Half seriously yours
No kidding...


>International Computer Science Institute | mar...@icsi.berkeley.edu

> ...it is simplicity that is difficult to make.

Yes, and for some people it is difficult to deal with things that are
a little more complicated than the simple ones....

*******************************************
* Vlastimil Adamovsky *
* Smalltalk, C++ and Envelop development *
*******************************************

I assume that Vlastimil advocates the use of INTERCAL as the proper
language for Smalltalk and C++ environment construction. :)

Happy programming to everybody, whatever language you use!

Vlastimil Adamovsky

unread,
Feb 1, 1996, 3:00:00 AM2/1/96
to
mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:

>This is probably off-track, but, as a diversion, please bear this last
>gripe of mine on the language which is going to be swapped away by
>Java (no, it is not Lisp :) ).

I wonder what language the Java language is implemented in? Why is Java a
"simplified" C++ (according to the creators)?
Because for some people the programming language has to be simple,
otherwise they are not able to work with it. But they are able to
write thick books about its death.

>One of the things that bothered me most with C++, was this sort of
>"newspeak" which it introduced. For years people had been working in
>Flavors, Clos, Smalltalk etc, and they pretty much shared a common
>terminology. Then suddenly, we did not have "methods" any more, we
>had "member functions", we lost the "inheritance" (pun intended) and
>started "deriving classes".

Methods in Smalltalk are implementations of the code to be executed
when a message has been sent; the selector (which identifies the
message) picks up this code. In Smalltalk you are "SENDING"
messages, thus messages are not functions, and it follows that member
functions are not methods. You don't send functions; that's why you
have to call them by their right name: MEMBER FUNCTIONS.
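
To see the distinction in code: below is a minimal sketch in plain C-style
C++. Every name in it (Obj, Class, send, Point) is hypothetical, invented
for illustration; it is not any particular Smalltalk's runtime. A member
function call compiles down to an indexed fetch from a fixed vtable slot,
while a Smalltalk-style send looks its selector up at run time and can
fail into doesNotUnderstand:.

    #include <stdio.h>
    #include <string.h>

    struct Obj;
    typedef void (*Method)(struct Obj*);

    struct Entry { const char* selector; Method code; };
    struct Class { const char* name; struct Entry* methods; int n; };
    struct Obj   { struct Class* isa; }; /* every object carries its class */

    void send(struct Obj* self, const char* selector) {
        for (int i = 0; i < self->isa->n; i++)       /* run-time lookup */
            if (strcmp(self->isa->methods[i].selector, selector) == 0) {
                self->isa->methods[i].code(self);
                return;
            }
        printf("%s doesNotUnderstand: #%s\n", self->isa->name, selector);
    }

    void greet(struct Obj* self) {
        printf("%s received #greet\n", self->isa->name);
    }

    struct Entry point_methods[] = { { "greet", greet } };
    struct Class Point = { "Point", point_methods, 1 };

    struct Member {                   /* the member-function counterpart */
        virtual void hello() { printf("member function hello\n"); }
    };

    int main() {
        struct Obj p; p.isa = &Point;
        send(&p, "greet");      /* selector search, bound at send time  */
        send(&p, "explode");    /* no such method: doesNotUnderstand    */
        Member m; m.hello();    /* fixed vtable slot, no search         */
        return 0;
    }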

>Of course, the argument is that C++ wanted to "clarify" such things
>and the choice of new terminology was a "good thing".

Your comment did clarify it?

>Well, I must say that I am very pleased to see that Java somewhat
>reintroduced the "old" terminology and that Lisp, (as well as Dylan)
>is not yet dead.

Is the "old" terminology more efficient than the "new" one? Do you
have some benchmarks?

Daniel Barlow

unread,
Feb 1, 1996, 3:00:00 AM2/1/96
to
In article <823078...@wildcard.demon.co.uk>,

Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:
>C++, coz it's there, and everybody knows it. Sometimes the only factor
>that counts is the cost ("Lisp programmers know the value of everything,
>and the cost of nothing", in which case, "C++ programmers know the
>cost of everything, and the value of nothing"), and a typical C++
>development system will cost about $300.

I don't know lisp. I've done little bits of elisp (but I understand that
that Doesn't Count) and I've played with scheme (actually guile), but
that's about all. I'd like to learn (curiosity value) but I'm
holding off until I can think of something to write in it. And until
I have more time.

The limiting factor is surely not cost. I have gcc and gcl on my
computer; they were both entirely free. In fact, I understand that gcl
comes precompiled as part of the popular Slackware Linux distribution.
I know of approximately one linux user who actually installed it (except
by accident). Why the low takeup?

One possible consideration (at least among the unixheads that I swap
opinions with) is that the only lisp most people on unix see is emacs.
And emacs is big, slow, and stops regularly to tell you it's
`garbage collecting'. This might not be representative of GC in general,
but I bet a lot of people think it is.

--
Web: http://www.sjc.ox.ac.uk/users/barlow Mail: daniel...@sjc.ox.ac.uk

panic("bad_user_access_length executed (not cool, dude)");


Richard Pitre

unread,
Feb 1, 1996, 3:00:00 AM2/1/96
to
> >> Just look at the technical strength of the argument that GC is not
> >> "in the tradition of the C community"...
> >
You can do anything in any of these languages, but at what personal cost?
The cost of GC is not so deeply hidden as the implicit costs of using C.
I believe that it's really a question of when to use which tool.

I program a fair bit in C. I think that ANSI C is a really big deal because
you can buy reliable implementations that act almost exactly like a small
book says they will, and it's a complete tool. Like most surviving C
programmers, I have developed my own style which offers me a degree of
protection from my own particular dyslexias. I began programming in C when
I got tired of trying to write what were effectively C-like macros to
manipulate stacks and the like in assembler. I redid my theory of personal
economics and came to the conclusion that $80 was a cheap way to buy back a
lot of my time.

Initially I found that C is sneaky and I made serious errors that ground
away at me. Right away I tried to make a data structure with built-in
memory management. At one low point I woke up with my face in a puddle of
drool on the desk next to my keyboard. I made up stories about the C-ward
at the hospital. (More stories: I bought some chickens and I would kill one
and sprinkle its blood on the keyboard before each major test run. I put an
X on my cpu housing over the place where the cpu chip was and kept a loaded
gun nearby.) With a little thought I could accurately guess what assembler
code the compiler would generate, but I still made serious errors. I had
come to appreciate the simplicity and reliability of expressing my intent
in assembler on a Motorola 68K. For me the combined syntax and semantics of
C are, to this day, unsimple. I never have to think about it any more
because the grammar at least has now worked its way down into the eproms of
my knuckles and finger tips.

I have, in the last few years, played with snippets of code in various
dialects of Lisp. Lisp primitives, while at first confusing, are utterly
simple. It seems to me that Lisp needs no semantics beyond a personal
evaluation of its primitives, and of course memory management is handled by
a GC. The necessity of memory management in C is compounded by the fact
that there is no well defined grammar and semantics for managing it well,
and I feel that C++ is only slightly better in this regard.

I am glad that C is available to me and that I can use it, but I'm
beginning to think that I could have more fun in Lisp, with the side effect
that I would accomplish more.

Jeff Dalton

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
In article <4eqh8l$c...@news2.ios.com> vl...@gramercy.ios.com (Vlastimil Adamovsky) writes:
>mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:
>
>>This is probably off-track, but, as a diversion, please bear this last
>>gripe of mine on the language which is going to be swapped away by
>>Java (no, it is not Lisp :) ).
>
>I wonder in what language is Java language implemented?

So what language _is_ it implemented in? C may be better than C++
for such things (I prefer it anyway).

-- jd

Paul Wilson

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
In article <822848...@wildcard.demon.co.uk>,

Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:
>In article <4eb75f$l...@cnn.Princeton.EDU> tim@franck "Tim Hollebeek" writes:
>
>If you want to know about basic
>garbage collecting, I'd recommend a computer science book, as it'll
>probably be more up to date.

I have to disagree here. I know of no textbooks with even a decent
discussion of garbage collection. The set of people who write textbooks
seems to be entirely disjoint from the set of people who know about modern
GC technology. The textbooks are about as bad as the pop programming
books, and that's pretty bad indeed. (The main exception is Appel's
very brief paper on GC in the advanced programming language implementation
book edited by Peter Lee. Appel's paper is good, but not broad enough
to cover the area.)

I suggest looking at the papers on our web site (in my .sig, below) which
include two surveys (long and medium-sized) on GC. (The long version will
appear in Computing Surveys after some revision.) There are also several
other papers there by my research group and a bunch by other people
(from the '91 and '93 OOPSLA GC workshops), and a big bibliography in
LaTeX .bib format. The web page also has links to Henry Baker's and Hans
Boehm's web pages.

Enjoy.

--
| Paul R. Wilson, Comp. Sci. Dept., U of Texas @ Austin (wil...@cs.utexas.edu)
| Papers on memory allocators, garbage collection, memory hierarchies,
| persistence and Scheme interpreters and compilers available via ftp from
| ftp.cs.utexas.edu, in pub/garbage (or http://www.cs.utexas.edu/users/wilson/)

Jeff Dalton

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
In article <s08wx6a...@lox.ICSI.Berkeley.EDU> mar...@lox.icsi.berkeley.edu (Marco Antoniotti) writes:
>
>One of the things that bothered me most with C++, was this sort of
>"newspeak" which it introduced. For years people had been working in
>Flavors, Clos, Smalltalk etc, and they pretty much shared a common
>terminology. Then suddendly, we did not have "methods" any more, we
>had "member functions", we lost the "inheritance" (pun intended) and
>started "deriving classes".

In article <DLywC...@research.att.com> b...@research.att.com (Bjarne Stroustrup <9758-26353> 0112760) replied:

I think you have your dates wrong. The C++ terminology was picked in
1979. Then, the work on CLOS hadn't yet started, Smalltalk-80 hadn't
been completed, and its predecessor was not well known outside a small
circle of researchers. I don't recall the dates for Flavors and Loops,
but again these languages were not known outside the AI community for
quite a few years.

I think it's fine to pick a different terminology for C++, and
certainly so in 1979 when I don't think any OO terminology was
widely established. It can even be helpful to have > 1 terminology
if it lets us think about something in > 1 way.

On the other hand, Smalltalk terminology seemed to be fairly well
established by the time C++ started to be widely known. (When was
the Byte Smalltalk issue, for instance?) (We might also consider
the publication dates of various books.)

Flavors was significantly before CLOS. Canon's paper is, I think,
1980, and it says a practical implementation was in use since late
1979. Other Lisp OO systems were used earlier.

I knew something about Flavors (which is not really a language but
rather part of various languages in the Lisp family) no later than
mid-1980 (but I forget just when), even though I was not (and had
not been) in the AI community.

I knew something about Smalltalk as well, before Smalltalk-80.

(Not that my knowledge shows very much on its own.)

-- jeff

Stefan Monnier

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
In article <4eqh80$c...@news2.ios.com>,
Vlastimil Adamovsky <vl...@gramercy.ios.com> wrote:
] Why would you need to modify the vtable? Create a pointer to your
] table and modify it as you wish. By the way, can you modify a C

But then you have the vtable pointer plus your table pointer:
I call that a waste.
You don't have enough access to the representation to take advantage of it.
This is what I call "too high-level".
The "too low-level" stuff is the fact that casts are too powerful to enable the
creation of a copying GC unless you severely restrict the set of programs
correctly handled.

There are probably other examples of "too high" or "too low" levels in C++
(implicit calls to constructors are probably among the former).
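
To make the class-pointer point concrete, here is a minimal sketch in
plain C-style C++; ClassInfo, Point, and the rest are invented for
illustration. If you roll the class pointer yourself, one word per object
buys you dispatch *and* arbitrary per-class metadata (say, the instance
size a copying GC or an allocator would want), which is exactly what the
hidden vtable pointer refuses to share:

    #include <stdio.h>

    struct ClassInfo {
        const char* name;         /* metadata the vtable won't carry   */
        unsigned instance_size;   /* e.g. for a copying GC             */
        void (*print)(void*);     /* one "virtual" slot, hand-made     */
    };

    struct Point { struct ClassInfo* klass; int x, y; };

    void print_point(void* self) {
        struct Point* p = (struct Point*)self;
        printf("Point(%d, %d)\n", p->x, p->y);
    }

    struct ClassInfo PointClass = { "Point", sizeof(struct Point),
                                    print_point };

    int main() {
        struct Point p = { &PointClass, 3, 4 };
        p.klass->print(&p);                         /* dispatch...      */
        printf("%s instances are %u bytes\n",       /* ...and metadata, */
               p.klass->name, p.klass->instance_size); /* one word      */
        return 0;
    }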


Stefan

Vlastimil Adamovsky

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:

>I assume that Vlastimil advocates the use of INTERCAL as the proper
>language for Smalltalk and C++ environments construction. :)

Would you let me know in what newsgroup INTERCAL is discussed?
I don't work in academia, so maybe I'm missing something..

Richard Pitre

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
In article <4eqh8l$c...@news2.ios.com> vl...@gramercy.ios.com (Vlastimil
Adamovsky) writes:
> mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:
>
> >This is probably off-track, but, as a diversion, please bear this last
> >gripe of mine on the language which is going to be swapped away by
> >Java (no, it is not Lisp :) ).
>
> I wonder what language the Java language is implemented in? Why is Java a
> "simplified" C++ (according to the creators)?
> Because for some people the programming language has to be simple,
> otherwise they are not able to work with it. But they are able to
> write thick books about its death.
>
> >One of the things that bothered me most with C++, was this sort of
> >"newspeak" which it introduced. For years people had been working in
> >Flavors, Clos, Smalltalk etc, and they pretty much shared a common
> >terminology. Then suddenly, we did not have "methods" any more, we
> >had "member functions", we lost the "inheritance" (pun intended) and
> >started "deriving classes".
> Methods in Smalltalk are implementations of the code to be executed
> when a message has been sent; the selector (which identifies the
> message) picks up this code. In Smalltalk you are "SENDING"
> messages, thus messages are not functions, and it follows that member
> functions are not methods. You don't send functions; that's why you
> have to call them by their right name: MEMBER FUNCTIONS.
>
> >Of course, the argument is that C++ wanted to "clarify" such things
> >and the choice of new terminology was a "good thing".
> Your comment did clarify it?
>
> >Well, I must say that I am very pleased to see that Java somewhat
> >reintroduced the "old" terminology and that Lisp, (as well as Dylan)
> >is not yet dead.
> Is the "old" terminology more efficient than the "new" one? Do you
> have some benchmarks?
>
> >Half seriously yours
> No kidding...
> >International Computer Science Institute | mar...@icsi.berkeley.edu
> > ...it is simplicity that is difficult to make.
> Yes, and for some people it is difficult to deal with things that are
> a little more complicated than the simple ones....
>

Not only are the syntax and semantics of C++ complex; if a programmer is not
sensitive to C++ implementation issues, they might lose much of the benefit of
C++, i.e. object-oriented code that runs as efficiently as C code. So a good C++
programmer accounts for these implementation issues at the same time that they
deal with design issues like clear expression and extensibility. It seems to me
that getting C++'s real benefits over other OOP languages requires a higher
degree of commitment and technical proficiency. The
implementation/efficiency issues as discussed by Stroustrup in his books are
nontrivial for many programmers, and for programmers who are sometimes awakened
when their head hits the monitor it's even more problematical. I'm not
suggesting that C++ is not a good thing. I'm glad to have it, but I would not
minimize the seriousness of its manifest and hidden complexity.

richard

Cyber Surfer

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <DM5s1w.4D0...@cogsci.ed.ac.uk>
je...@cogsci.ed.ac.uk "Jeff Dalton" writes:

> On the other hand, Smalltalk terminology seemed to be fairly well
> established by the time C++ started to be widely known. (When was
> the Byte Smalltalk issue, for instance?) (We might also consider
> the publication dates of various books.)

August 1981. I still have that issue - it was the first copy of Byte
that I bought. It was only a few months later that I was able to
understand it, when I read it at 3 AM, one sleepless night. I skipped
the intro, and started with an article on _programming_. Then it clicked.

The August 1985 issue of Byte introduced me to the idea of functional
programming. It may have been less than a year later when I wrote my
first Lisp interpreter. A few years ago, I discovered Dylan...

> I knew something about Smalltalk as well, before Smalltalk-80.
>
> (Not that my knowledge shows very much on its own.)

True. ;-) However, it still tells us something significant about you.

Vlastimil Adamovsky

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:
>.....typical C++

>development system will cost about $300.

>So, it's there and it's dirt cheap. No wonder so many people use it.
>--

And if somebody misses something that is not there, it is very easy to
make his own extensions. For example, two years ago I needed something
like "metaclasses" in my C++ program and RTTI was not available to me
yet. So I implemented a Windows-specific version. It works great.
When I needed a messaging system (to be able to send messages in
Smalltalk fashion) I could implement it. When I needed to implement
an event handling mechanism, it was easy.
So I see a value in the extensibility of a language that is already
very powerful and complex (in a positive sense).

Do I need Garbage Collection? NO! It was never ever needed in my
programs. People crying for GC should see DB Tools.h++ written by
RogueWave and then they would understand why it is not necessary.

I wish our software industry more real programmers in the New
Year 1996.

Cyber Surfer

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <4eqjlb$i...@news.ox.ac.uk>
bar...@xserver.sjc.ox.ac.uk "Daniel Barlow" writes:

> I don't know lisp. I've done little bits of elisp (but I understand that
> that Doesn't Count) and I've played with scheme (actually guile), but
> that's about all. I'd like to learn (curiosity value) but I'm
> holding off until I can think of something to write in it. And until
> I have more time.

Start by reading the Lisp FAQ, and then pick one of the tutorial books
mentioned in it. Don't start with any reference books, unless you already
know Lisp and perhaps need a good ref, or you want to write a compiler.
I can't comment on elisp or guile, as I've not used either of them, nor
am I likely to any time soon.



> The limiting factor is surely not cost. I have gcc and gcl on my
> computer; they were both entirely free. In fact, I understand that gcl
> comes precompiled as part of the popular Slackware Linux distribution.
> I know of approximately one linux user who actually installed it (except
> by accident). Why the low takeup?

I have a copy of Linux-FT, but I can't use it yet. A hardware problem
on the motherboard of my machine stops it from installing (it hangs).
NT installed on the same drive ok, but couldn't format it. Perhaps it
was the same hardware problem. (Dell are fixing it with a BIOS update.
Eventually.) Alternatively, I could buy a new disk controller. However,
I'd need to buy a new drive, too, as the 1st drive is now used
by NT, which I really _do_ need to use.

BTW, I'm _paid_ to use VC++. I know that gcc is available for Win32,
but I have no guarantee that it'll work with the tools I use, and
since one of those tools is MFC, I might as well use the MFC Wizards
and make the experience as painless as possible.

As for gcl, I don't know if that's available for NT yet.



> One possible consideration (at least among the unixheads that I swap
> opinions with) is that the only lisp most people on unix see is emacs.
> And emacs is big, slow, and stops regularly to tell you it's
> `garbage collecting'. This might not be representative of GC in general,
> but I bet a lot of people think it is.

I see Netscape frequently "pause", telling me that it is looking up a
name. As Henry Baker has pointed out, that tells us that DNS is slow.
It can be, in some cases. On a local network, it can be so fast that
you never see it. Perhaps the answer is simply to never tell the user
why a program pauses. No more "writing to file" msgs.

Cyber Surfer

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <4eu5l9$5...@jive.cs.utexas.edu>
wil...@cs.utexas.edu "Paul Wilson" writes:

> >If you want to know about basic
> >garbage collecting, I'd recommend a computer science book, as it'll
> >probably be more up to date.
>
> I have to disagree here. I know of no textbooks with even a decent
> discussion of garbage collection. The set of people who write textbooks
> seem to be entirely disjoint from the set of people who know about modern
> GC technology. The textbooks are about as bad as the pop programming
> books, and that's pretty bad indeed. (The main exception is Appel's
> very brief paper on GC in the advanced programming language implementation
> book edited by Peter Lee. Appel's paper is good, but not broad enough
> to cover the area.)

Was I referring to modern GC? I'm not sure. I don't know of any books
on modern GC, but a book 20 years old seems to contain GC techniques
that many C/C++ programmers are unaware of. Even if that's the best
book on the subject, it could still enlighten a few programmers.



> I suggest looking at the papers on our web site (in my .sig, below) which
> include two surveys (long and medium-sized) on GC. (The long version will
> appear in Computing Surveys after some revision.) There are also several
> other papers there by my research group and a bunch by other people
> (from the '91 and '93 OOPSLA GC workshops), and a big bibliography in
> LaTeX .bib format. The web page also has links to Henry Baker's and Hans
> Boehm's web pages.

I can't use LaTeX, and I don't currently have a Postscript viewer.
Perhaps most C/C++ programmers will be more fortunate, but I dunno.
However, I can view HTML pages, and I bet that almost everyone else
can, these days. I can certainly recommend Henry Baker's home page.
I'll have to look at Hans Boehm's pages soon.

Thanks.

Tim Hollebeek

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to wil...@cs.utexas.edu
Paul Wilson (wil...@cs.utexas.edu) wrote:
: In article <822848...@wildcard.demon.co.uk>,
: Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:

: >In article <4eb75f$l...@cnn.Princeton.EDU> tim@franck "Tim Hollebeek" writes:
: >
: >If you want to know about basic
: >garbage collecting, I'd recommend a computer science book, as it'll
: >probably be more up to date.

Be more careful with your attributions. I didn't write that.

--
Tim Hollebeek | Everything above is a true statement, for sufficiently
<space for rent> | false values of true.
Princeton Univ. | t...@wfn-shop.princeton.edu
-------------------| http://wfn-shop.princeton.edu/~tim

Cyber Surfer

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <DM09G0.MzG...@cogsci.ed.ac.uk>
je...@cogsci.ed.ac.uk "Jeff Dalton" writes:

> >> Just look at the technical strength of the argument that GC is not
> >> "in the tradition of the C community"...
> >
> >Yeah, I love it. ;-)
>
> But it _is_ true that GC is not in the tradition of the C community.
> The argument that it's a "hidden cost" is key here. C programmers
> feel that they know what everything will do in machine terms, and
> to a fair extent they are right. (That's so despite a number of
> difficulties and exceptions.)

That may be less true these days, with the widespread use of C++, for
the development of Windows software. There's a lot of "hidden cost" in
C++ frameworks. I don't think of these as exceptions, because of the
high popularity of these development tools. It's probably the programmers
who use C, but not large libraries, who are the exceptions.

Also, the future, at least for business software, looks more like
being document oriented, rather than application oriented. This will
introduce even more hidden costs.



> So when an allocation might do lots of collecting as well (or
> whatever), and you don't really know when, that seems to move
> C into the higher-level / less-in-touch-with-the-machine camp.

Exactly. The kind of APIs coming out of MS, Apple, and IBM these days
look more and more high level to me. C is looking less attractive than
C++, but the C++ frameworks that make complex APIs easier to handle
can hide even more - which is both a feature _and_ a problem.



> >Mind you, I'm very happy with a mark/compact GC, and I found one
> >in a computer science book, Fundamentals of Data Structures, by
> >E Horowitz and S Sahni. While they're not anti-GC, they refer to
> >Knuth and his belief that specialist languages such as Lisp and
> >SNOBOL are not necessary, and that list and string processing can
> >be done in any language. The languages that seem to have interested
> >them tend to be PL/I, Pascal, and Fortran. Not at all like Lisp.
>
> Well, surely it's true that list and string processing can be done
> in (almost) any language. I've done list processing in Basic, for
> instance. (Good Basics can, of course, do strings, so that's not
> interesting.)

I've done it in Forth. In fact, implementing Forth was the first time
I ever used linked lists. Since then, I've looked for better ways (i.e.
better languages) for handling lists. The same applies to strings,
as I wrote my first string package in Forth. Hmmm. That was also my
first compiler...



> But there's a difference between a language is not necessary
> and saying it's not valuable, or not worth having and using.
> I'm not sure when Knuth stated this belief, but such points had
> a different role in the past then they tend to do today,
> because it was not so widely known that, or how, you could
> do list or string processing.

Agreed. The book I quoted from was already dated when I first read it,
in the early 80s. The Knuth quote could, for all I know, be _much_ older.



> A similar thing today (or maybe a few years back) might be to
> point out that you could do object-oriented programming in
> (almost) any language.

You can even do it in C++. ;-) I've not done any OOP in Forth, but
there's at least one book on the subject. However, I've not read it.
Perhaps you can also do functional programming in C/C++, but I've
not tried it. I _have_ written an experimental Lisp to C compiler,
which I should re-write someday. Anyone who has read a good Lisp
tutorial (W&H, SICP, etc) should be able to write a similar compiler.

Cyber Surfer

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <4f02tb$5...@news2.ios.com>
vl...@gramercy.ios.com "Vlastimil Adamovsky" writes:

> Do I need Garbage Collection? NO! It was never ever needed in my
> programs. People crying for GC should see DB Tools.h++ written by
> RogueWave and then they would understand why it is not necessary.

My experience is that programming in C, you develop a blind spot.
Because some things are hard to do, you don't see them as possible
ways of coding something.



> I wish our software industry more real programmers in the New
> Year 1996.

More programmers with blind spots? I won't argue with that. If I find
an edge, the last thing I'm going to do is tell everyone else about it.
If I call one of the "tricks" I use GC, and you tell me that it can't
be an edge, then I won't disillusion you.

I wish our software industry better tools and more productive
programmers - so long as their software isn't competing with mine,
that is. ;-) That's unlikely, so I'm not worried...

Matthias Blume

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to

> In article <4f02tb$5...@news2.ios.com>
> vl...@gramercy.ios.com "Vlastimil Adamovsky" writes:
>
> > Do I need Garbage Collection? NO! It was never ever needed in my
> > programs. People crying for GC should see DB Tools.h++ written by
> > RogueWave and then they would understand why it is not necessary.
>
> My experience is that programming in C, you develop a blind spot.
> Because some things are hard to do, you don't see them as possible
> ways of coding something.
>
> > I wish our software industry more real programmers in the New
> > Year 1996.
>
> More programmers with blind spots?

Well, I also think that the world would be a better place if it were
full of people like Vlastimil. We wouldn't be inhibited in our work
by unnecessary knowledge, we would be so full of ourselves... it would
be awesome. We would be all so smart, that real-life problems would
be so infinitely small that in order to spice things up we would use
all the wrong tools (or no tools at all) to get things done. Only
this way would we find excitement and satisfaction in life.

I wish I were a Real Guy, with a macho attitude, just like Vlastimil.
BTW, women just *love* macho programmers.

Cheers,
--
-Matthias

Cyber Surfer

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
In article <4f2ila$6...@jive.cs.utexas.edu>
wil...@cs.utexas.edu "Paul Wilson" writes:

> >> >If you want to know about basic
> >> >garbage collecting, I'd recommend a computer science book, as it'll
> >> >probably be more up to date.
> >>
> >> I have to disagree here. I know of no textbooks with even a decent
> >> discussion of garbage collection. [...]
> >
> >Was I referring to modern GC? I'm not sure. I don't know of any books
> >on modern GC, but a book 20 years old seems to contain GC techniques
> >that many C/C++ programmers are unaware of. Even if that's the best
> >book on the subject, it could still enlighten a few programmers.
>

> OK, agreed. There are some fundamental algorithms that *are* in some
> textbooks and should be better known.

I agree. I just suspect that those who think about these issues a lot
underestimate how a programmer who is unfamiliar with even basic GC
will react to the more advanced techniques. Disbelief is a common
reaction, which I'm sure you've also seen. Arthur C Clarke's third
law is worth quoting here: "Any sufficiently advanced technology is
indistinguishable from magic". Confront a programmer with some "magic"
and they'll disbelieve it.

> On the other hand, even some of the textbooks that do discuss the
> fundamental algorithms often propagate naive views about GC that are rather
> damaging. (Henry Baker, Hans Boehm, and I have all put a fair amount
> of effort into trying to slow the spread of those myths on the net.)

And I hope you have a lot of success in your efforts, some of which I've
witnessed here on UseNet. However, you first have to kill the idea that
"successful GC" is magic. If it does work, then you'll have to offer
examples of "real world" uses of GC before a typical programmer will be
convinced. Most of the programmers I know are very suspicious of computer
science, which doesn't help. There's no way they'll ever bother reading
a paper about GC. On the other hand, they probably won't have read any
books on the subject, either.

This only leaves the word of other programmers, and the masses of code
that apparently has been written _without_ the use of GC. As I've said,
C/C++ programmers see the cost of everything and the value of nothing.
Please note that I'm a C/C++ programmer myself, and I also make
that mistake from time to time. Some languages let you focus on the
very small picture, instead of the big picture, where GC can help.

> This is also true of traditional allocators. The history of allocator
> research has been a big mess---the literature is a bit of a disaster
> area---and the textbooks reflect this. The analyses in the books are
> shallow and largely wrong. (This is less attributable to the textbook
> authors than the weak discussions of GC. It's not their fault that they
> can't summarize the gist of the literature and get it right, because
> the literature in general is disorganized, inconsistent, and often wrong.)

I won't argue with that! ;-) I'd love to see a good summary.

> One of the problems in the area of traditional memory allocators is that
> people have taken one particular textbook far too seriously---Volume 1
> of Knuth's _The_Art_of_Computer_Programming_. It was written in 1968,
> and some of it has turned out to be less than helpful. It's still the
> standard reference, though, and other textbook writers often regurgitate
> its discussion of memory allocators. Implementors often look at it and
> go and implement things that have been known to be bad since the early
> 1970's. (Knuth is still tremendously influential in allocator work,
> despite the fact that he doesn't appear to have written anything about it
> in over 25 years. This is not Knuth's fault, of course---inertia makes
> the world go 'round.)

I also have that book, and the other 3 volumes. However, they didn't
stop me from writing and using a GC. If I wanted to write a floating
point package (very unlikely, even 10 years ago), then I might turn
to Knuth, but not for anything like memory management.

Perhaps I've been brainwashed by those evil people at Xerox PARC,
when I read the August 1981 issue of Byte. ;-) I doubt it. I just
don't have a blind spot that prevents me from seeing problems for
which a GC can help. As you said, Knuth's book is very old. In it
he refers to decimal computers! <ahem> Very dated, now.

> "Modified First Fit" with the roving pointer is the clearest example. It
> was a bad idea, and it was quickly shown to be bad, but some other textbook
> writers keep mindlessly cribbing from Knuth, and even some implementors still
> use it.

Is it really worse than Best Fit? I've wondered about that ever
since I first read that book. You seem like a good person to ask. ;)

> Obligatory positive comment: the best textbook discussion of allocators
> that I know of is Tim Standish's in _Data_Structure_Techniques_. He doesn't
> recognize the near-universal methodological problems with allocator studies,
> but he's unique in recognizing the basic data structure and algorithm issues
> in implementing allocators.

I'll see if I can find that book. Thanks.

> This site also has our big survey on memory allocators from IWMM '95, which
> I hope will influence future textbooks. It talks about empirical methodology
> as well as giving a fairly exhaustive treatment of implementation techniques.

I FTP'd it a week or two ago. I intend to read it, as soon as I
get a (binary) copy of Ghostscript for NT. Thanks.

Ed Kaulakis

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to cyber_...@wildcard.demon.co.uk
If Vlastimil has never orphaned storage or followed a dead pointer and if CyberSurfer has never
needed a program to run in predictable real time with minimal hardware resources, we should all pay
attention to this thread. Otherwise, not.

Paul Wilson

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
In article <823365...@wildcard.demon.co.uk>,
Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:
>In article <4eu5l9$5...@jive.cs.utexas.edu>

> wil...@cs.utexas.edu "Paul Wilson" writes:
>
>> >If you want to know about basic
>> >garbage collecting, I'd recommend a computer science book, as it'll
>> >probably be more up to date.
>>
>> I have to disagree here. I know of no textbooks with even a decent
>> discussion of garbage collection. [...]
>
>Was I referring to modern GC? I'm not sure. I don't know of any books
>on modern GC, but a book 20 years old seems to contain GC techniques
>that many C/C++ programmers are unaware of. Even if that's the best
>book on the subject, it could still enlighten a few programmers.

OK, agreed. There are some fundamental algorithms that *are* in some
textbooks and should be better known.

On the other hand, even some of the textbooks that do discuss the
fundamental algorithms often propagate naive views about GC that are rather
damaging. (Henry Baker, Hans Boehm, and I have all put a fair amount
of effort into trying to slow the spread of those myths on the net.)

This is also true of traditional allocators. The history of allocator
research has been a big mess---the literature is a bit of a disaster
area---and the textbooks reflect this. The analyses in the books are
shallow and largely wrong. (This is less attributable to the textbook
authors than the weak discussions of GC. It's not their fault that they
can't summarize the gist of the literature and get it right, because
the literature in general is disorganized, inconsistent, and often wrong.)

One of the problems in the area of traditional memory allocators is that
people have taken one particular textbook far too seriously---Volume 1
of Knuth's _The_Art_of_Computer_Programming_. It was written in 1968,
and some of it has turned out to be less than helpful. It's still the
standard reference, though, and other textbook writers often regurgitate
its discussion of memory allocators. Implementors often look at it and
go and implement things that have been known to be bad since the early
1970's. (Knuth is still tremendously influential in allocator work,
despite the fact that he doesn't appear to have written anything about it
in over 25 years. This is not Knuth's fault, of course---inertia makes
the world go 'round.)

"Modified First Fit" with the roving pointer is the clearest example. It
was a bad idea, and it was quickly shown to be bad, but some other textbook
writers keep mindlessly cribbing from Knuth, and even some implementors still
use it.

Obligatory positive comment: the best textbook discussion of allocators
that I know of is Tim Standish's in _Data_Structure_Techniques_. He doesn't
recognize the near-universal methodological problems with allocator studies,
but he's unique in recognizing the basic data structure and algorithm issues
in implementing allocators.

>> I suggest looking at the papers on our web site (in my .sig, below) which
>> include two surveys (long and medium-sized) on GC. (The long version will
>> appear in Computing Surveys after some revision.)

This site also has our big survey on memory allocators from IWMM '95, which
I hope will influence future textbooks. It talks about empirical methodology
as well as giving a fairly exhaustive treatment of implementation techniques.

>> There are also several
>> other papers there by my research group and a bunch by other people
>> (from the '91 and '93 OOPSLA GC workshops), and a big bibliography in
>> LaTeX .bib format. The web page also has links to Henry Baker's and Hans
>> Boehm's web pages.

Vlastimil Adamovsky

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
je...@cogsci.ed.ac.uk (Jeff Dalton) wrote:

>In article <4eqh8l$c...@news2.ios.com> vl...@gramercy.ios.com (Vlastimil Adamovsky) writes:
>>mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:
>>
>>>This is probably off-track, but, as a diversion, please bear this last
>>>gripe of mine on the language which is going to be swapped away by
>>>Java (no, it is not Lisp :) ).
>>
>>I wonder in what language is Java language implemented?

>So what language _is_ it implemented in? C may be better than C++
>for such things (I prefer it anyway).

You cannot compare C++ and C. The philosophies behind the two languages
are quite different.
It is like comparing assembler and Eiffel. Which is better?

Paul Wilson

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
In article <823455...@wildcard.demon.co.uk>,

Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:
>In article <4f2ila$6...@jive.cs.utexas.edu>
> wil...@cs.utexas.edu "Paul Wilson" writes:
>
>> The history of allocator
>> research has been a big mess---the literature is a bit of a disaster
>> area---and the textbooks reflect this. The analyses in the books are
>> shallow and largely wrong. (This is less attributable to the textbook
>> authors than the weak discussions of GC. It's not their fault that they
>> can't summarize the gist of the literature and get it right, because
>> the literature in general is disorganized, inconsistent, and often wrong.)
>
>I won't argue with that! ;-) I'd love to see a good summary.

Well, very briefly:

1. Traditional allocator research has largely missed the point, because
the true causes of fragmentation haven't been studied much. Most
studies (a recent exception being Zorn, Grunwald, and Barrett's
work at the U. of Colorado at Boulder, and Phong Vo's latest paper
from Bell Labs) have used synthetic traces whose realism has never
been validated, rather than real program behavior.

2. The standard synthetic traces of "program" behavior are unrealistic,
because they're based on pseudo-random sequences, even if the
object size and lifetime distributions are realistic (a sketch of
such a trace generator follows this list). For good
allocator policies, this causes an unrealistically high degree
of fragmentation, because you're shooting holes in the heap at
random. For some programs and some allocators, though, you get
unrealistically low fragmentation. In general, the random trace
methodology makes most allocators' performance look pretty similar,
when in fact the regularities in real traces make some allocators
work *much* better than others.

3. People have often focused on the speed of allocator mechanisms,
at the expense of studying the effects of policy in a realistic
context. As it turns out, some of the best policies are amenable
to fast implementations if you just think about it as a normal
data-structures-and-algorithms problem. The best policies have
often been neglected because people confused policy and mechanism,
and the best-known mechanisms were slow.

4. People have usually failed to separate out "true" fragmentation
costs---due to the allocator's placement policy and its interactions
with real program behavior---from simple overheads of simplistic
mechanisms. Straightforward tweaks can reduce these costs easily.

5. The best-known policies (best fit and address-ordered first fit)
work better than anyone ever knew, while some policies (Knuth's
"Modified First Fit" or "Next Fit") work worse than anyone ever
knew, for some programs. This is because randomized traces tend
probabilistically to ensure that certain important realistic
program behaviors will never happen in traditional experiments.

6. None of the analytical results in the literature (beginning with
Knuth's "fifty percent rule") are valid. They're generally based
on two or three assumptions that are all systematically false for
most real programs. (Random allocation order, steady-state behavior,
and independence of sizes and lifetimes. To make mathematical
analyses tractable, people have sometimes made stronger and even
more systematically false assumptions, like exponentially-distributed
random lifetimes.)

7. Because of these problems, most allocator research over the last
35 years has been a big waste of time, and we still don't know
much more than we knew in 1968, and almost nothing we didn't
know in the mid-70's.
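
As promised above, here is a tiny sketch (plain C-style C++, all constants
invented) of the methodology that points 2 and 6 criticize: a synthetic
trace driven by pseudo-random sizes and exponentially distributed
lifetimes, with no program phase behavior at all. Feeding such a trace to
an allocator is the "shooting holes in the heap at random" effect:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void) {
        srand(1968);                        /* arbitrary seed           */
        for (int t = 0; t < 10; t++) {
            int size = 8 << (rand() % 6);   /* plausible-looking sizes  */
            double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            int life = 1 + (int)(-100.0 * log(u)); /* exponential life  */
            printf("alloc %4d bytes at t=%2d, free at t=%3d\n",
                   size, t, t + life);
        }
        return 0;
    }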

> [...]


>> "Modified First Fit" with the roving pointer is the clearest example. It
>> was a bad idea, and it was quickly shown to be bad, but some other textbook
>> writers keep mindlessly cribbing from Knuth, and even some implementors still
>> use it.
>
>Is it really worse than Best Fit? I've wondered about that ever
>since first read that book. You seem like a good person to ask. ;)

Yes, it is significantly worse, and some versions are often far worse, even
pathological. If you keep the free list in address order, it's just worse.
If you keep the free list in LIFO order, it can be pathological. (LIFO
order is attractive because it's cheaper than address order, and if all
other things were equal, it would improve locality.) For some programs,
it systematically takes little bites out of big free blocks, making those
blocks unusable for allocation requests of the same big size in the future.
Many traditional experiments have masked this effect by using smooth
distributions of object sizes, so that odd-sized blocks are less of a
problem.

(Given random sizes, you're likely to have a request that's a good fit
for the block size pretty soon, even after taking a bite out of it. For
real traces, you're likely *not* to get any requests for
comparable-but-smaller sizes anytime soon. And the randomization of
order tends to eliminate the potential pathological interactions due
to particular sequence behavior. You eliminate the extreme cases.)
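
Here is a minimal sketch of just the two placement *policies* (plain
C-style C++; the block sizes and the request trace are invented, and all
splitting/coalescing bookkeeping is omitted). It reproduces the
bite-taking pathology described above: after a run of small requests,
next fit has nibbled the one big block down below 500, so the final big
request fails where best fit still succeeds:

    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 8
    static int blocks[NBLOCKS];        /* free block sizes              */
    static int rover;                  /* next fit's roving pointer     */

    static int best_fit(int want) {    /* smallest block that fits      */
        int best = -1;
        for (int i = 0; i < NBLOCKS; i++)
            if (blocks[i] >= want && (best < 0 || blocks[i] < blocks[best]))
                best = i;
        return best;
    }

    static int next_fit(int want) {    /* first fit, from the rover on  */
        for (int n = 0; n < NBLOCKS; n++) {
            int i = (rover + n) % NBLOCKS;
            if (blocks[i] >= want) { rover = i; return i; }
        }
        return -1;
    }

    static void run(const char* name, int (*pick)(int)) {
        static const int init[NBLOCKS] = { 16, 600, 16, 16, 16, 16, 16, 16 };
        memcpy(blocks, init, sizeof blocks);
        rover = 0;
        for (int k = 0; k < 12; k++) { /* twelve small requests...      */
            int i = pick(10);
            if (i >= 0) blocks[i] -= 10;
        }
        printf("%s: request for 500 %s\n", name, /* ...then a big one   */
               pick(500) >= 0 ? "succeeds" : "FAILS");
    }

    int main(void) {
        run("best fit", best_fit);     /* small requests go to the 16s  */
        run("next fit", next_fit);     /* small requests nibble the 600 */
        return 0;
    }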

>> Obligatory positive comment: the best textbook discussion of allocators
>> that I know of is Tim Standish's in _Data_Structure_Techniques_. He doesn't
>> recognize the near-universal methodological problems with allocator studies,
>> but he's unique in recognizing the basic data structure and algorithm issues
>> in implementing allocators.
>
>I'll see if I can find that book. Thanks.
>

>> [Our] site [http://www.cs.utexas.edu/users/oops/] also has our big survey
>> on memory allocators from IWMM '95, which I hope will influence future
>> textbooks. It talks about empirical methodology as well as giving a
>> fairly exhaustive treatment of implementation techniques.

--

Paul Wilson

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
In article <823365...@wildcard.demon.co.uk>,

Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:
>In article <4eu5l9$5...@jive.cs.utexas.edu>
> wil...@cs.utexas.edu "Paul Wilson" writes:
>
>Was I referring to modern GC? I'm not sure. I don't know of any books
>on modern GC, but a book 20 years old seems to contain GC techniques
>that many C/C++ programmers are unaware of. Even if that's the best
>book on the subject, it could still enlighten a few programmers.

True. Simple GC techniques are acceptable for a fair number of applications.
My favorite example is scripting languages. Most scripting languages are
so slow that the cost of GC is negligible, even if it's implemented badly
by the standards of the state of the art. People often use reference counting
because that's what the C++ books give examples of, when mark-sweep would
work like a charm, and often when copying collection wouldn't be very
hard. Reference counting works well in the sense that it wastes little
space most of the time---getting space back immediately in most cases,
rather than waiting until the next GC---but for a slow language implementation,
mark sweep is just about as efficient if you crank the GC frequency up.

With a simple non-generational GC, you may pay a fair amount of CPU time
to get space back quickly, but for scripting languages that cost is still
usually swamped by the cost of interpretation. The real cost may be in
locality, because a simple GC touches most or all of the live data at
each GC cycle. So you'd rather have a generational GC.

And a generational GC is pretty easy to implement, too. Appel implemented
a decent generational GC for ML in 500 lines of C code. For fast
implementations of general-purpose languages, you may want something a little
fancier (his write barrier is probably too simple), but not much fancier.
For a scripting language, a 500 line GC is probably plenty fast.
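
For the mark-sweep case, a minimal sketch in plain C-style C++: a fixed
pool of cons cells, a naive recursive mark from a single root, and a sweep
that threads unmarked cells onto a free list. The pool size, the single
root, and every name here are invented for illustration -- this is the
textbook algorithm, not any particular collector:

    #include <stdio.h>

    #define HEAP_CELLS 1024

    struct Cell { struct Cell *car, *cdr; int marked; };

    static struct Cell heap[HEAP_CELLS];
    static struct Cell* freelist;
    static struct Cell* root;            /* single root, for the sketch */

    static void mark(struct Cell* c) {
        while (c && !c->marked) {        /* iterate down the cdr chain, */
            c->marked = 1;               /* recurse (naively) on cars   */
            mark(c->car);
            c = c->cdr;
        }
    }

    static int sweep(void) {             /* unmarked cells -> free list */
        int freed = 0;
        freelist = NULL;
        for (int i = 0; i < HEAP_CELLS; i++) {
            if (heap[i].marked) heap[i].marked = 0;
            else { heap[i].cdr = freelist; freelist = &heap[i]; freed++; }
        }
        return freed;
    }

    static struct Cell* cons(struct Cell* car, struct Cell* cdr) {
        if (!freelist) {                 /* out of cells: collect       */
            mark(root);
            printf("gc: reclaimed %d cells\n", sweep());
            if (!freelist) return NULL;  /* heap genuinely full         */
        }
        struct Cell* c = freelist;
        freelist = freelist->cdr;
        c->car = car; c->cdr = cdr; c->marked = 0;
        return c;
    }

    int main(void) {
        sweep();                         /* initial sweep = free list   */
        for (int i = 0; i < 5000; i++)   /* mostly garbage: only every  */
            root = cons(NULL, (i % 10) ? NULL : root); /* 10th cell kept */
        return 0;
    }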

Marco Antoniotti

unread,
Feb 6, 1996, 3:00:00 AM2/6/96
to
In article <4et7nt$1...@news2.ios.com> vl...@gramercy.ios.com (Vlastimil Adamovsky) writes:

From: vl...@gramercy.ios.com (Vlastimil Adamovsky)
Newsgroups: comp.lang.lisp,comp.lang.c++
Date: Fri, 02 Feb 1996 14:49:15 GMT
Organization: Internet Online Services
Lines: 13
NNTP-Posting-Host: ppp-38.ts-7.hck.idt.net
X-Newsreader: Forte Free Agent 1.0.82

Xref: agate comp.lang.lisp:20745 comp.lang.c++:172712

mar...@lox.icsi.berkeley.edu (Marco Antoniotti) wrote:

>I assume that Vlastimil advocates the use of INTERCAL as the proper
>language for Smalltalk and C++ environments construction. :)

Would you let me know in what newsgroup INTERCAL is discussed?
I don't work in academia, so maybe I'm missing something..

I am surprised. How come a "real quiche-hating programmer" does not
know where INTERCAL is discussed? alt.lang.intercal, of course. :)

You can also check the INTERCAL entry in the "New Hacker's Dictionary"
and its references in the RETROCOMPUTING entry. :)

Not bad for somebody "who works in academia", isn't it? :)

Cheers


--
Marco Antoniotti - Resistente Umano
===============================================================================

International Computer Science Institute | mar...@icsi.berkeley.edu

1947 Center STR, Suite 600 | tel. +1 (510) 643 9153
Berkeley, CA, 94704-1198, USA | +1 (510) 642 4274 x149
===============================================================================

...it is simplicity that is difficult to make.

Phong Vo

unread,
Feb 6, 1996, 3:00:00 AM2/6/96
to
Paul Wilson wrote:
>

> 1. Traditional allocator research has largely missed the point, because
> the true causes of fragmentation haven't been studied much. Most
> studies (a recent exception being Zorn, Grunwald, and Barrett's
> work at the U. of Colorado at Boulder, and Phong Vo's latest paper
> from Bell Labs) have used synthetic traces whose realism has never

^^^^^^^^^


> been validated, rather than real program behavior.
>

Just a quick note that I am now with the new AT&T Research.
The main topics of the paper that Paul mentioned are a new API for memory allocation
called Vmalloc and a performance study comparing it against a number of
well-known malloc implementations. The code is currently available for
non-commercial use, for anyone interested, at the URL below:
http://www.research.att.com/orgs/ssr/reuse/
This address also has pointers to many other neat software tools from our group.

Phong Vo, k...@research.att.com

Vlastimil Adamovsky

unread,
Feb 8, 1996, 3:00:00 AM2/8/96
to
Cyber Surfer <cyber_...@wildcard.demon.co.uk> wrote:

>My experience is that programming in C, you develop a blind spot.

I don't program in C.

>I wish our software industry better tools and more productive
>programmers

It can be expressed this way, yes...

Vlastimil Adamovsky

unread,
Feb 8, 1996, 3:00:00 AM2/8/96
to
Ed Kaulakis <e...@mcig.com> wrote:

It has happened maybe three or four times (dead pointers), but I never
felt the need for GC.
I compile a program, run some memory-checking programs, and that's it.

Bruno Haible

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
Daniel Barlow <bar...@xserver.sjc.ox.ac.uk> wrote:
>
> I don't know lisp. I've done little bits of elisp (but I understand that
> that Doesn't Count) and I've played with scheme (actually guile), but
> that's about all. I'd like to learn (curiosity value) but I'm
> holding off until I can think of something to write in it. And until
> I have more time.

You can write any kind of non-trivial text/data/representation processing
in Lisp. But you have to wait until you have _less_ time.

For example, a couple of days ago I wrote a smallie which converts a
unidiff (output of "diff -u") to the equivalent context diff (output
of "diff -c3"), for readability.

First I wrote it in Lisp, within 2 hours. It worked and still works fine.

Then I rewrote it in C++. That took me more than a whole day. It mostly works,
but the output contains garbled lines in rare cases. Hence the resulting
program is useless unless I put several more hours of debugging into it.

Needless to say, the C++ code is three times as large as the Lisp
code, and therefore doesn't reflect the overall algorithm as well.

Conclusion: The time you save by using a high-level language like Lisp and
an interactive development environment is amazing. I invite you to
try it for yourself, with a non-trivial program of yours.

> The limiting factor is surely not cost. I have gcc and gcl on my
> computer; they were both entirely free. In fact, I understand that gcl
> comes precompiled as part of the popular Slackware Linux distribution.
> I know of approximately one linux user who actually installed it (except
> by accident). Why the low takeup?

Maybe because gcl, as present in Slackware, is pretty spartan:
no command-line history (unless you use Emacs), CLOS not built-in,
a slow compiler. You should complement it with CLISP.

Bruno

----------------------------------------------------------------------------
Bruno Haible net: <hai...@ilog.fr>
ILOG S.A. tel: +33 1 4908 3585
9, rue de Verdun - BP 85 fax: +33 1 4908 3510
94253 Gentilly Cedex url: http://www.ilog.fr/
France url: http://www.ilog.com/


Bruno Haible

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
Vlastimil Adamovsky <vl...@gramercy.ios.com> wrote:
> Why would you need to modify the vtable? Create a pointer to your
> table and modify it as you wish. By the way, can you modify a C
> language compiler so it will behave as YOU want to?

Yes. gcc and g++ come with source. You want to add a warning, add a
built-in or fix a bug? You can.

Bruno

Bruno Haible

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
Jeff Dalton <je...@cogsci.ed.ac.uk> wrote:

>>> Just look at the technical strength of the argument that GC is not
>>> "in the tradition of the C community"...
>>
>>Yeah, I love it. ;-)
>
> But it _is_ true that GC is not in the tradition of the C community.
> The argument that it's a "hidden cost" is key here. C programmers
> feel that they know what everything will do in machine terms, and
> to a fair extent they are right. (That's so despite a number of
> difficulties and exceptions.)
>

> So when an allocation might do lots of collecting as well (or
> whatever), and you don't really know when, that seems to move
> C into the higher-level / less-in-touch-with-the-machine camp.

But apparently more and more C programmers learn and use C++. This
language also has hidden costs here and there:

- An assignment may involve more than moving around memory words,

- A method (a.k.a. "member function") call usually involves two
pointer accesses and two branches in addition to the C function
call,

- Virtual inheritance introduces hidden pointers within the objects,

- Exception handling adds tables whose size is comparable to the
size of the code,

and this is silently accepted by the C++ community. Exceptions
haven't been in the C tradition either, yet they are very welcome
in the C++ community. In fact, the C++ programmers are pushing the
C++ compiler vendors to implement these things.

So: why had the GC to wait for the arrival of Java until it became
widely accepted?


Bruno Haible email: <hai...@ilog.fr>
Software Engineer phone: +33-1-49083585


Bruno Haible

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
Marco Antoniotti <mar...@lox.icsi.berkeley.edu> wrote:
>
> But since we are at it, I want to complete the list of penances that
> have been required from me. (Please assume the warning is repeated here).
>
> ...
>
> 3 - Common Lisp is dead.

The king is dead; who's the next king? Let's try it:

"Hurray! Long live ISLisp!"

Well, ISLisp is not accepted because it doesn't come with C++ syntax.
So let's try it this way:

"Hurray! Long live Dylan!"

But Dylan is suddenly already dead as well. How come? Never mind, let's try
it this way:

"Hurray! Long live Java!"

Seems to be for real this time. But: Where are the macros?

Dazed and confused.

---

Half seriously yours,

Bruno


Vlastimil Adamovsky

unread,
Feb 11, 1996, 3:00:00 AM2/11/96
to
bl...@zayin.cs.princeton.edu (Matthias Blume) wrote:


>I wish I were a Real Guy, with a macho attitude, just like Vlastimil.

I am happy I am your idol.

>BTW, women just *love* macho programmers.

I know.

William Paul Vrotney

unread,
Feb 12, 1996, 3:00:00 AM2/12/96
to
In article <4fjhu5$o...@nz12.rz.uni-karlsruhe.de>
hai...@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:

> Marco Antoniotti <mar...@lox.icsi.berkeley.edu> wrote:

Please don't be dazed and confused to the point of giving up on Lisp; we
need your dedication. And thank you for your contributions.

I look at it this way. Lisp is the ONLY language that is STILL ALIVE (after
all these years)! I am still making a living writing Lisp programs, as are
a number of other people I know. Not a lot, but Lisp certainly is not
dead by any means, and there is a good chance that it will be rediscovered
and thrive in the 21st century. The reason for this is that most currently
popular languages have discovered, and are still discovering, ideas that
Lisp has already (not to make a pun) addressed, but they have not combined
all of those ideas as cohesively as Lisp has. I see most of these popular
languages as addressing a specific need rather than the more general
programming problem that Lisp has addressed. If you solve somebody's
specific problem with a specific solution, you are guaranteed short-term
success with that solution but not necessarily long-term success.

Perhaps not directly addressing your bedazzlement with Dylan and Java, but
maybe of some help to someone: I predict the "Static Typing" argument, which
in my opinion is the root of the current practical argument against Lisp
with any substance, will peter out in the near future. The reason being
that it is probably too hard for code analyzers to do a perfect job. And
if it is not perfect, then it is not good enough to merit dealing with the
difficulties associated with it. The best-written complex C++ programs,
from my experience, will eventually break unpredictably. Code "purifiers"
do a pretty good job, but once again they are not, and probably cannot be,
made perfect, and hence the same argument applies.

One of the difficulties with Static Typing is that it impedes natural
programming constructs that support generality and hence further
abstraction. Programmers have now seen complex C++ projects fizzle after
becoming unmanageable, from the standpoint of building higher-level
abstractions, simply because of the weight of the artificial tricks built
into the class hierarchy. These artificial tricks are there simply to
support the Static Typing pedantry; i.e., in a Dynamically Typed system
they would not be there. C++ programmers, including myself, are quite
proud of these tricks when we discover them. But basically we are all
fooling ourselves and avoiding the higher-level view of things.

Even the definition of the C++ language itself has to use some artificial
tricks for the language to be usable, and these make the language itself
more complex and harder to learn. Virtual Functions and Templates are two
good examples of such tricks: clever, but unnatural, and necessary only to
satisfy the Static Typing pedantry. Virtual-function-style dispatch in
Lisp, for example, is just one of many ways to do dynamic type
dispatching, and it is not required by the definition of the language.
And Templates can be completely avoided. I'm still waiting for a GOOD
solution for heterogeneous container classes in C++, something you don't
even have to think about in Lisp.
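
To make that last point concrete, here is a sketch in Common Lisp (the
class and function names are made up for illustration; CLOS generic
functions are only one of the dispatch mechanisms I have in mind):

(defclass circle () ((radius :initarg :radius :reader radius)))
(defclass square () ((side   :initarg :side   :reader side)))

;; A generic function dispatches at run time on the class of its
;; argument; no vtable tricks, no template instantiation.
(defgeneric area (shape))
(defmethod area ((s circle)) (* pi (radius s) (radius s)))
(defmethod area ((s square)) (* (side s) (side s)))

;; An ordinary list is already a heterogeneous container:
(mapcar #'area (list (make-instance 'circle :radius 1)
                     (make-instance 'square :side 2)))
;; => (3.14159... 4)

The list holds objects of different classes with no casts and no common
base class declared for the purpose.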

So to sum up, I say

"Hurray, long live Lisp! But put up with specific solutions in
the meantime."


--

William P. Vrotney - vro...@netcom.com

Philip Jackson

unread,
Feb 12, 1996, 3:00:00 AM2/12/96
to
William Paul Vrotney (vro...@netcom.com) wrote:

: I look at it this way. Lisp is the ONLY language that is STILL ALIVE (after
: all these years)!

Isn't Fortran still alive, and in widespread use for "scientific
programs"? If so, it's my understanding it originated about the same time
as Lisp.

Cheers,

Phil Jackson
------------------
"...for the word is the sole sign and the only certain mark of the
presence of thought hidden and wrapt up in the body..." -- Descartes
------------------
Standard Disclaimers. <pjac...@ic.net>


Richard Pitre

unread,
Feb 12, 1996, 3:00:00 AM2/12/96
to
In article <vrotneyD...@netcom.com> vro...@netcom.com (William Paul
Vrotney) writes:
> In article <4fmhie$t...@condor.ic.net> pjac...@falcon.ic.net (Philip
> Jackson) writes:
>
> William Paul Vrotney (vro...@netcom.com) wrote:
>
> : I look at it this way. Lisp is the ONLY language that is STILL ALIVE
> : (after all these years)!
>
> Isn't Fortran still alive, and in widespread use for "scientific
> programs"? If so, it's my understanding it originated about the same time
> as Lisp.
>
> Are Roman numerals still alive? They are still in widespread use.
>
> Yes, FORTRAN and Lisp originated around the same time but there was no Ada,
> C, C++, Mathematica, Smalltalk ... etc back then to compete with. My
> understanding is that most of the FORTRAN used these days is old existing
> libraries or engineering applications. Probably few programmers today would
> start a new non-engineering project in FORTRAN. However this is not true of
> Lisp. Engineers would be more apt to choose one of the Math/Engineering
> interpreters/Lab instead of FORTRAN for numerical computation. Whether, if
> this is true, that qualifies as being either dead or alive is a matter of
> semantics. I know of no one starting a brand new non-engineering project in
> FORTRAN today, if someone does, let us know, and I will either revise my
> statement, or advise the person doing such of other alternatives. However
> even if we put FORTRAN and Lisp in the same category, if Lisp is ONLY ONE OF
> TWO oldest programming languages STILL ALIVE today, that is still pretty
> remarkable.

>
> --
>
> William P. Vrotney - vro...@netcom.com

I don't have specific instances lined up for you, but I believe that on big
parallel engines FORTRAN is becoming one of, if not the, primary programming
languages. THE Fortran data structure, arrays of fixed-size numbers, finds
nirvana in the happy space of the hardware. Fortran 90 supports the
construction of other data structures, but I predict that this will never be
popular, because processing these other data structures will never be
"efficient" on the TVN/Fortran/C hardware. There is even a High Performance
Fortran (HPF) standard. As far as I'm concerned they might as well EPROM the
godforsaken Fortran90/C/C++ compiler and make the standard shell an
interactive editor/debugger. The hardware is designed for the language, and
vice versa. Don't get me wrong: there are lots of good uses for high-speed
array/parallel-array processing, and good old Fortran is about all that you
need for this.

Programming in C is the Boss Hoss thing to do these days. Most people
programming in C++ would have much greater success programming in Visual
Basic. Programmers who distribute their programs written in C++ should be
forced to include a warning. The warning should display when the program
starts up, and the user should, at a minimum, be required to press
Ctrl-Alt-Del in order to proceed with the execution of the program. The
warning should state that the code was written in C++, that the programmer
has no credentials to support any assumption of quality, and that to the
extent that you rely on the code for anything important, the code is
dangerous. A meta-level warning of this nature should be imposed on C++
compilers.

As with any toy, C++ will eventually lose its popularity. Eventually the
association of sexual prowess with the ability to write the Hello World
program in C++ will fade. Then everyone will be left with the reality that
C++ is indispensable for a narrow range of problems but otherwise a
millstone. Just in case the reader is toying with the idea that I'm crazy,
I will give confirmation by saying that the notion of
object-oriented programming is primarily necessitated by limitations of
procedural programming. It's almost a good idea.

Lisp was a good idea; then a big committee got hold of it and it got big.
With standardization it quit evolving to take advantage of the fruits of
research. While Lisp will always be a great tool and it will not die, it
will eventually be supplanted in its domain.

richard


T. Kurt Bond

unread,
Feb 12, 1996, 3:00:00 AM2/12/96
to t...@wvlink.mpl.com
Richard Pitre wrote:
> Lisp was a good idea; then a big committee got hold of it and it got big.

Remember, Lisp is not just Common Lisp. What about the other interesting dialects,
such as EuLisp or ILOG Talk?

> With standardization it quit evolving to take advantage of the fruits of
> research.

Hmm. The standardization of another Lisp dialect, Scheme, appears to have slowed
considerably after its latest standard; perhaps because many of its implementors
use it as a research vehicle?

I really don't see Lisp in general as no longer evolving; I'm not even sure that
Common Lisp isn't still evolving.

--
T. Kurt Bond, t...@sol.newnet.navy.mil

Philip Jackson

unread,
Feb 13, 1996, 3:00:00 AM2/13/96
to
Richard Pitre (pi...@n5160d.nrl.navy.mil) wrote:
: In article <vrotneyD...@netcom.com> vro...@netcom.com (William Paul
: Vrotney) writes:
: > In article <4fmhie$t...@condor.ic.net> pjac...@falcon.ic.net (Philip
: > Jackson) writes:
: >
: > William Paul Vrotney (vro...@netcom.com) wrote:
: >
: > : I look at it this way. Lisp is the ONLY language that is STILL ALIVE
: > : (after all these years)!
: >
: > Isn't Fortran still alive, and in widespread use for "scientific
: > programs"? If so, it's my understanding it originated about the same
: > time as Lisp.
: >
: > Are Roman numerals still alive? They are still in widespread use.

Are Roman numerals a programming language? Answer = No, of course.

: >
: > Yes, FORTRAN and Lisp originated around the same time but there was no Ada,
: > C, C++, Mathematica, Smalltalk ... etc back then to compete with. My
: > understanding is that most of the FORTRAN used these days is old existing
: > libraries or engineering applications. Probably few programmers today would
: > start a new non-engineering project in FORTRAN. However this is not true of
: > Lisp. Engineers would be more apt to choose one of the Math/Engineering
: > interpreters/Lab instead of FORTRAN for numerical computation. Whether, if
: > this is true, that qualifies as being either dead or alive is a matter of
: > semantics. I know of no one starting a brand new non-engineering project in
: > FORTRAN today, if someone does, let us know, and I will either revise my
: > statement, or advise the person doing such of other alternatives. However
: > even if we put FORTRAN and Lisp in the same category, if Lisp is ONLY ONE OF
: > TWO oldest programming languages STILL ALIVE today, that is still pretty
: > remarkable.

Agreed. Lisp is a great language and I hope it stays around much longer.
Fortran is a great language also, for what it does, though of course there's
no comparison with Lisp. Perhaps Lisp:Fortran::Fortran:Roman Numerals? :-)

: I don't have specific instances lined up for you, but I believe that on big
: parallel engines FORTRAN is becoming one of, if not the, primary programming
: languages. THE Fortran data structure, arrays of fixed-size numbers, finds
: nirvana in the happy space of the hardware. Fortran 90 supports the
: construction of other data structures, but I predict that this will never
: be popular, because processing these other data structures will never be
: "efficient" on the TVN/Fortran/C hardware. There is even a High Performance
: Fortran (HPF) standard. As far as I'm concerned they might as well EPROM
: the godforsaken Fortran90/C/C++ compiler and make the standard shell an
: interactive editor/debugger. The hardware is designed for the language,
: and vice versa. Don't get me wrong: there are lots of good uses for
: high-speed array/parallel-array processing, and good old Fortran is about
: all that you need for this. [...]


Thanks, Richard, for providing this information.

William Paul Vrotney

unread,
Feb 13, 1996, 3:00:00 AM2/13/96
to

In article <4fooft$8...@condor.ic.net> pjac...@falcon.ic.net (Philip
Jackson) writes:
> Richard Pitre (pi...@n5160d.nrl.navy.mil) wrote:
> : In article <vrotneyD...@netcom.com> vro...@netcom.com (William Paul
> : Vrotney) writes:
> : > In article <4fmhie$t...@condor.ic.net> pjac...@falcon.ic.net (Philip
> : > Jackson) writes:
> : >
> : > Isn't Fortran still alive, and in widespread use for "scientific
> : > programs"? If so, it's my understanding it originated about the same
> : > time as Lisp.
> : >
> : > Are Roman numerals still alive? They are still in widespread use.
>
> Are Roman numerals a programming language? Answer = No, of course.
>

The point that I was trying to make here is to raise the question:

"What do we mean when we say something non-biological is alive?"

And in particular, what do we mean when we say that a programming language
is alive? In my last post I suggested that it might mean that programmers
would use it to write NEW programs in a NEW project. I didn't think that
this was widely true of FORTRAN anymore, much as I don't think it is
widely true that people would use Roman numerals to compute anymore.

I didn't intend to get off on this FORTRAN tangent, sorry. If people are in
fact using FORTRAN to start new projects then I stand corrected and change
my statement to "Lisp is ONE of the oldest programming languages STILL
ALIVE", although that doesn't have quite the impact. The idea here was to
compare Lisp with all other programming languages and not just the oldest
like FORTRAN. The intent of this idea was to point out that Lisp is still
being used to start NEW projects combined with the fact that it has been
around for a long time. People who say "Lisp is dead" need to answer to
this. I didn't mean to put down FORTRAN, I meant to uplift Lisp.

Richard Pitre

unread,
Feb 13, 1996, 3:00:00 AM2/13/96
to
In article <311F90...@sol.newnet.navy.mil> "T. Kurt Bond" writes:

Lisp is probably one of the best environments for experimenting with new
ideas, languages and algorithms. It takes another level of effort to
actually extend the Lisp development environment, and the experimenter
usually doesn't have the background or time to do so. For example, it's
easy to implement Prolog in Lisp, but it's quite another matter to make the
unification algorithm use your Lisp function definitions (e.g. see the
language Escher). Addressing fundamental semantic issues, completeness and
soundness, and a slew of practical problems is too much for me and most
people that I know. Of all the languages that have been standardized, Lisp
is probably the one that is least restricted by its standardization. I
haven't been distinguishing between dialects of Lisp.
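
For a sense of the scale: a bare-bones unifier is only a screenful of
Lisp. A sketch (my convention here: variables are symbols starting with
"?"; no occurs check, no other niceties, and certainly none of the
integration with Lisp function definitions that Escher-style languages
address):

(defun variable-p (x)
  (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

(defun unify (x y &optional (bindings '()))
  (cond ((eq bindings 'fail) 'fail)
        ((equal x y) bindings)
        ((variable-p x) (unify-variable x y bindings))
        ((variable-p y) (unify-variable y x bindings))
        ((and (consp x) (consp y))
         (unify (cdr x) (cdr y) (unify (car x) (car y) bindings)))
        (t 'fail)))

(defun unify-variable (var val bindings)
  (let ((binding (assoc var bindings)))
    (if binding
        (unify (cdr binding) val bindings)  ; var already bound: check it
        (acons var val bindings))))         ; else record the new binding

E.g. (unify '(likes ?x lisp) '(likes robin ?y))
=> ((?Y . LISP) (?X . ROBIN))

The hard part is everything after this: making such a unifier see through
your existing Lisp definitions, soundness, completeness, and so on.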

It would be a great thing if some recognized Lisp language experts got
together and began work on a specification of a new standard for Lisp that
incorporated the more useful functionality of logic and constraint logic
programming. Standardized graphical Lisp readers and writers would also be
useful. A committee of one person, or of three very like-minded persons,
might accomplish something really good for Lisp. A bigger committee would
guarantee the need for several gigabytes of memory just to handle the
standard function library, and an expert in the intricacies of the new Lisp
Canon Law to interpret programs.

richard


Justin Cormack

unread,
Feb 15, 1996, 3:00:00 AM2/15/96
to


In article <vrotneyD...@netcom.com>
vro...@netcom.com (William Paul Vrotney) wrote:

> Perhaps not directly addressing your bedazzlement with Dylan and Java, but
> maybe of some help to someone: I predict the "Static Typing" argument, which
> in my opinion is the root of the current practical argument against Lisp
> with any substance, will peter out in the near future. The reason being
> that it is probably too hard for code analyzers to do a perfect job. And
> if it is not perfect, then it is not good enough to merit dealing with the
> difficulties associated with it. The best-written complex C++ programs,
> from my experience, will eventually break unpredictably. Code "purifiers"
> do a pretty good job, but once again they are not, and probably cannot be,
> made perfect, and hence the same argument applies. One of the difficulties
> with Static Typing is that it impedes natural programming constructs that
> support generality and hence further abstraction. ...

Actually it is *not* too hard for code analysers to do the job of static
typing, and there are algorithms, based on unification, that provably work.
Languages based on this include Miranda and Haskell, which allow
polymorphic functions (that is, functions that can take any of several
types; see the example below) to be defined without losing the benefits of
static, compile-time type checking, without any run-time overhead, and
with reusable higher-order functions. This is fairly recent work (the
earliest implementations were around 1985), so it came too late for Lisp.
In the long term, the fact that this can be done is probably the best
argument against Lisp, and also against the unduly restrictive static
typing of C/Pascal-like languages, which, as you rightly say, makes
reusability very difficult.

As an example (in Miranda), one can define the function map, which applies
a provided function to each element of a list:

(Notation: ":" is infix cons, "[]" is nil; function application is just a
space, so "f x" is Lisp's (f x).)

map f [] = []
map f (x:xs) = f x : map f xs

The compiler determines that map has type

map :: (* -> **) -> [*] -> [**] || funny Miranda notation for types...

i.e. take a function from one type to another, and a list of items of the
first type, and return a list of items of the second type. Then it can
give you a compile-time type error if you map the function "add 2" over
a list of characters, but not if it is given a list of numbers.

Essentially the idea is that any function that does not make any use
of the types of its arguments can be given arguments of any type, and
it can also be guaranteed that no run-time type checking is needed
either.
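
For comparison, essentially the same definition in Lisp (untyped; I call
it MY-MAP because MAP is already taken in Common Lisp; a type-inferring
compiler could in principle recover the same type the Miranda compiler
does):

(defun my-map (f lst)
  (if (null lst)
      '()
      (cons (funcall f (car lst))
            (my-map f (cdr lst)))))

;; (my-map (lambda (x) (+ x 2)) '(1 2 3))    => (3 4 5)
;; (my-map (lambda (x) (+ x 2)) '(#\a #\b))  => a type error, but
;;                                              only at run time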

I haven't got the references to hand; email me or try
http://www.lpac.ac.uk/SEL-HPC/Articles/GeneratedHtml/functional.type.html

Justin Cormack
j.co...@ic.ac.uk

Jeffrey Mark Siskind

unread,
Feb 18, 1996, 3:00:00 AM2/18/96
to
In article <4g00le$g...@oak77.doc.ic.ac.uk> jp...@doc.ic.ac.uk (Justin Cormack) writes:

  Languages based on this include Miranda and Haskell, which allow
  polymorphic functions (that is, functions that can take any of several
  types; see the example below) to be defined without losing the benefits
  of static, compile-time type checking, without any run-time overhead,
  and with reusable higher-order functions. This is fairly recent work
  (the earliest implementations were around 1985), so it came too late
  for Lisp.

See Stalin, an implementation of R4RS Scheme, available free from my home
page. It does exactly this kind of type inference (and a lot more).

  In the long term, the fact that this can be done is probably the best
  argument against Lisp, and also against the unduly restrictive static
  typing of C/Pascal-like languages, which, as you rightly say, makes
  reusability very difficult.

Quite to the contrary. The fact that this can be done is probably the best
argument for languages like Lisp that omit type declarations and allow the
compiler to infer them, and against languages that require the user to
manually provide type declarations.

Jeff (home page http://www.cs.toronto.edu/~qobi)

Justin Cormack

unread,
Feb 18, 1996, 3:00:00 AM2/18/96
to

In article <QOBI.96Fe...@ee.technion.ac.il>, qo...@ee.technion.ac.il (Jeffrey Mark Siskind) writes:
|> See Stalin, an implementation of R4RS Scheme, available free from my home
|> page. It does exactly this kind of type inference (and a lot more).
|>
|>   In the long term, the fact that this can be done is probably the best
|>   argument against Lisp, and also against the unduly restrictive static
|>   typing of C/Pascal-like languages, which, as you rightly say, makes
|>   reusability very difficult.
|>
|> Quite to the contrary. The fact that this can be done is probably the best
|> argument for languages like Lisp that omit type declarations and allow the
|> compiler to infer them, and against languages that require the user to
|> manually provide type declarations.

I think we actually both agree here. If Lisp evolves in this direction
it will be fine. We need to add the ability to optionally declare types
too (or the error messages are too unhelpful...).
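
(For what it's worth, Common Lisp does already have optional type
declarations, though how much a given compiler checks or exploits them
varies. A minimal example:

(defun add2 (x)
  (declare (type fixnum x))   ; optional; advisory in most implementations
  (the fixnum (+ x 2)))

Whether the error messages get any better is, as you say, another
question.)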

kas...@user1.channel1.com

unread,
Feb 21, 1996, 3:00:00 AM2/21/96
to
My $.02 on all of this: my company sells a CL-based software system
that is used primarily by mechanical engineers. There is a fair
number of CL programmers here. The users must write CL code (with
our extensions) to use the tools we provide. In addition to using CL,
the users often use Fortran, simply because so many engineering
problems have already been coded in Fortran. Both languages are alive
and well.

--
Rich Kasperowski Cambridge, MA, USA
home: kas...@user1.channel1.com
work: ri...@concentra.com +1-617-229-4637
