
Guthery slams OOP in latest DDJ


Cliff Joslyn

Nov 20, 1989, 8:43:34 PM

I'm not an OOP programmer, but I read a real convincing argument in the
latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
Oriented?", pp. 80-86, by Scott Guthery.

He says a lot of things, but one quote: "Stripped of its fancy jargon,
an object is a lexically-scoped subroutine with multiple entry
points and persistent state. OOP has been around since subroutines were
invented in the 1940s. Objects were fully supported in the early
programming languages AED-0, Algol, and Fortran II. OOP was, however,
regarded as bad programming style by Fortran aficionados".

And more: "...the programmer is invited to pass the cost of expedience
onto the user of the system. This wholesale sacrificing of runtime
efficiency to programmer's convenience, this emphasis on the ease with
which code is generated to the exclusion of the quality, usability, and
maintainability of that code, is not found in any production programming
environment with which I am familiar. Let's not forget the Intel
432...which was OOP in silicon, and it failed because it was just too
slow. If we couldn't make OOP efficient in hardware, why do we think we
can make it efficient when we emulate it in software?"

And the conclusion: "OOP runs counter to much prevailing programming
practice and experience: in allocating and controlling software costs,
in modularity, in persistent state, in reuse, in interoperability, and
in the separation of data and program. Running counter to the
prevailing wisdom does not, of course, automatically make an innovation
suspect but neither does it automatically recommend it. To date, in my
opinion, advocates of OOP have not provided us with either the
qualitative arguments or the quantitative evidence we need to discard
the lessons painfully learned during the first 40 years of programming".

Any comments?

--
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjo...@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Binghamton NY 13901
V All the world is biscuit shaped. . .

Joseph N. Hall

Nov 21, 1989, 1:15:38 AM
In article <26...@bingvaxu.cc.binghamton.edu> cjo...@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>
>I'm not an OOP programmer, but I read a real convincing argument in the
>latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
>Oriented?", pp. 80-86, by Scott Guthery.
>...

>And more: "...the programmer is invited to pass the cost of expedience
>onto the user of the system. This wholesale sacrificing of runtime
>efficiency to programmer's convenience, this emphasis on the ease with
>which code is generated to the exclusion of the quality, usability, and
>maintainability of that code, is not found in any production programming
>environment with which I am familiar. ...

Pardon me, but is he discussing any OOP environment that is actually in
use??? How does C++ sacrifice runtime efficiency in ANY manner? (I mean,
if you've got to have virtual something you've got to have virtual
SOMETHING ...) How is Smalltalk deficient (compared to, say, FORTRAN)
in the quality, usability and maintainability of code? Does Mr. Guthery
perhaps prefer Ada? (Or, then again, maybe that's what he thinks represents
the future of OOP.)

DDJ has lately become a nasty little magazine in which I've seen all kinds
of off-the-wall viewpoints ... as an aside, I remember a few months ago when
one of the contributors stated flatly that there was no viable system for
winning at blackjack and that we shouldn't bother him with letters saying
there was ... now I have the perspective of someone who worked with a friend
on a thorough, closed numerical analysis of Atlantic City BJ and I can tell
you that their expected take per hand IF you play correctly is about .5%,
and that if you count correctly you can periodically find the deck in such
shape that YOU can eke out .1-.5% -- it's pretty tough on your brain, though.
But anyway, this lofty, uninformed tone is too prevalent in DDJ for me
nowadays and I just don't enjoy reading the magazine.

Sigh ... guess I'll have to go out and get this one, though ...

As others have said and others will say after me, if you want convincing
arguments FOR OOP, consult Bertrand Meyer's "Object-Oriented Software
Construction" -- heavy Eiffel slant, but the first few chapters are quite
general and tremendously readable.

v v sssss|| joseph hall || 4116 Brewster Drive
v v s s || j...@ecemwl.ncsu.edu (Internet) || Raleigh, NC 27606
v sss || SP Software/CAD Tool Developer, Mac Hacker and Keyboardist
-----------|| Disclaimer: NCSU may not share my views, but is welcome to.

leslie.b.locklear

Nov 21, 1989, 2:13:58 AM
In article <26...@bingvaxu.cc.binghamton.edu>, cjo...@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>
> I'm not an OOP programmer, but I read a real convincing argument in the
> latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
> Oriented?", pp. 80-86, by Scott Guthery.
>
Stuff deleted...

> Any comments?

Yes. This article, like so many in DDJ, was bogus. The author carefully
concocted his arguments with the absolute worst examples that he could find
in the OOP world and presented the problems as if each one were typically found
in *every* OOP language, program, and system. What is worse is that he did
not acknowledge that the very same problems exist in the structured programming
way of doing things.

The following comments are little gripes for those who have read the article.
If you haven't read it, you probably want to hit n now.

His example of the Intel 432 was especially poor. Do all object oriented
languages use the 432 model? Not at all. Why didn't he mention SOAR? Or
has he even heard of it?

I'm glad that Grace Hopper used OOP in 1944. I wish he would have shown us
some of her code -- it would have made interesting reading. Unfortunately I
doubt that *he* has seen it. And what language did she use? What's that you
say? Object oriented assembler? Well, it doesn't surprise me that the author
feels that assembler is an OOP language. After all, he thinks that Algol and
Fortran II *FULLY* supported objects. Get real!

The author thinks that Stroustrup wrote an article in 1980 that described C++
and complains that 2.0 is incompatible with that description and that its
description isn't complete yet. He goes further and gripes saying "Can
anything that's this hard to define be good for writing understandable and
maintainable programs?" Well, I'm not going to argue the merits of C++, but
a language called "C with classes" was described in the paper referenced by
the author. It wasn't C++. Furthermore, I understand that the author's
beloved Fortran language description isn't complete yet. And it won't be
as long as people use Fortran.

The author also writes at some length about how inefficient OOP languages are,
why reusability is bad, etc. He also makes some arguments as to how inheritance
is misused and why it is bad for team projects. To back up his assertions he
gives anecdotal evidence about some projects he was involved with. Well,
I am quite sure there are plenty of bad examples that can be given for both
OOP and structured methods. I've seen a few with my own eyes. The author
fails in this area, as in the rest of the article, to give enough concrete
information to back up his claims. He doesn't say what language was used, how
many people were involved, what the goals of the projects were, what kind of
support the management gave the projects, and last but certainly not least,
how much experience the team members had with the language used and OOP in
general. Content-free arguments such as his are to be ignored.

I could go on, but to sum it up, the author makes up his own definition of
what he thinks object oriented programming is and proceeds to blast it
using poorly constructed, hearsay arguments. The article is virtually
free of any hard information that might back up the author's assertions.

I do have to agree with the author on one thing. There is far too much hype
being spread about the benefits that object-oriented programming will bring
to us all. I think that OOP is a GOOD THING and an improvement over the
structured methodologies that languages such as Algol, Fortran, etc. have
spawned. However, we still have a long road to travel before the benefits of
OOP are realized for most of us.

>
> --
> O------------------------------------------------------------------------->
> | Cliff Joslyn, Cybernetician at Large, cjo...@bingvaxu.cc.binghamton.edu
> | Systems Science, SUNY Binghamton, Binghamton NY 13901
> V All the world is biscuit shaped. . .

Barry Locklear
AT&T Bell Labs
attunix!lbl
l...@sf.att.com
(415) 324-6019

David R. Stampf

Nov 21, 1989, 9:09:32 AM
>I'm not an OOP programmer, but I read a real convincing argument in the
>latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
>Oriented?", pp. 80-86, by Scott Guthery.
>

Stop reading arguments and try some OOP programming! Then read
the arguments and make up your own mind ;-)



>He says a lot of things, but one quote: "Stripped of its fancy jargon,
>an object is a lexically-scoped subroutine with multiple entry

>points and persistent state. ...

I'm glad he stripped the jargon. Too bad in removing the jargon
he left off inheritance, polymorphism and encapsulation - which are central
to an OOP approach to problem solving.
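The three concepts Stampf names can be shown in miniature. (A sketch in Python for concision, even though the thread's languages are C++, Smalltalk, and Fortran; all class and method names here are invented for illustration.)

```python
# Minimal sketch of the three pillars: encapsulation, inheritance,
# and polymorphism. Names are illustrative only.

class Counter:
    """Encapsulation: _count is internal state, reached only via methods."""
    def __init__(self):
        self._count = 0

    def bump(self):
        self._count += 1

    def value(self):
        return self._count

class StepCounter(Counter):
    """Inheritance: reuse Counter's interface and state handling."""
    def __init__(self, step):
        super().__init__()
        self._step = step

    def bump(self):                 # polymorphism: override the base method
        self._count += self._step

def bump_all(counters):
    """One call site; each object supplies its own behavior."""
    for c in counters:
        c.bump()

a, b = Counter(), StepCounter(5)
bump_all([a, b])
print(a.value(), b.value())  # 1 5
```

None of this is expressible as "a subroutine with multiple entry points": the dispatch in `bump_all` depends on which object it is handed, not on which entry point was named.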



>OOP has been around since subroutines were
>invented in the 1940s. Objects were fully supported in the early
>programming languages AED-0, Algol, and Fortran II. OOP was, however,
>regarded as bad programming style by Fortran aficionados".
>

It's important to separate objects (data + methods) from the
rest of OOP. Fortran had multiple entry points (blech!) but didn't have
static data (i.e. persistent state) unless you think that global (common)
data qualifies. You might want to read Peter Wegner's "Dimensions of
Object-based Language Design" in OOPSLA '87 proceedings, where he
distinguishes between object-based and class-based and object-oriented
languages.

A real problem is that languages like Fortran do not map well onto
a total OOP approach to problem solving, and partial comparisons and
implementations are sadly lacking.

>... " Let's not forget the Intel


>432...which was OOP in silicon, and it failed because it was just too
>slow. If we couldn't make OOP efficient in hardware, why do we think we
>can make it efficient when we emulate it in software?"

The 432 was far ahead of its time - hardware and software wise.
Condemning OOP because of the 432 is like condemning aviation because
DaVinci's helicopter wouldn't fly. If the 432 were being designed today
with our accumulated knowledge of design of really big and complex ICs,
it would have a far better chance of succeeding.

OOP provides a view of modeling that does not necessarily match
the von Neumann view of computing machinery. But then, all languages are
compromises of the problem set and the target hardware. If your sole
measurement of success is execution efficiency on a particular machine,
then use an assembly language - but then, what can you do while waiting
for the user to hit return?

Basic and more recently Hypercard have succeeded more due to
their accessibility to programmers (and users!) than to their run-time
efficiency.

>And the conclusion: "OOP runs counter to much prevailing programming
>practice and experience: in allocating and controlling software costs,
>in modularity, in persistent state, in reuse, in interoperability, and
>in the separation of data and program. Running counter to the
>prevailing wisdom does not, of course, automatically make an innovation
>suspect but neither does it automatically recommend it. To date, in my
>opinion, advocates of OOP have not provided us with either the
>qualitative arguments or the quantitative evidence we need to discard
>the lessons painfully learned during the first 40 years of programming".
>

Anything new runs counter to prevailing practice and experience by
definition. We owe it to ourselves to investigate *new* approaches to
problem solving. The resurging interest in OOP seems to be coming at
a time when machines are powerful enough to support such concepts *and*
our software technology is also growing. (Compare the initial release
of Smalltalk on the Mac from Apple to Digitalk as an example. It's hard
to believe the performance differences between the two!)

Finally, prevailing practice and experience are not necessarily
the same thing as prevailing *wisdom*; in fact, the growth in the number
of articles on OOP indicates that the wisdom is growing. If this wisdom
shows that OOP is good, practice and experience will follow. If not,
then we will still have learned something. Either way, we win.

It is kind of scary to think that the Fortran model - one of our
first tries at programming - was the right way ;-).

< dave stampf

Andrew Koenig

Nov 21, 1989, 9:26:22 AM
In article <29...@cbnewsl.ATT.COM>, l...@cbnewsl.ATT.COM (leslie.b.locklear) writes:

> I do have to agree with the author on one thing. There is far too much hype
> being spread about the benefits that object-oriented programming will bring
> to us all.

Hear, hear! Down with strongly-hyped languages!

Object-oriented programming is a tool. A useful tool indeed,
that can be applied in many contexts -- but not all! Part of
being a professional is knowing what tools are available and
how and when to apply them.
--
--Andrew Koenig
a...@europa.att.com

Lee Sailer

Nov 21, 1989, 10:49:44 AM
I read an earlier version of this article. It was so packed with half
truths and incorrect statements that I filed it in the round file.

lee

Daniel S. Riley

Nov 21, 1989, 11:41:27 AM
>I'm not an OOP programmer, but I read a real convincing argument in the
>latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
>Oriented?", pp. 80-86, by Scott Guthery.
>
>He says a lot of things, but one quote: "Stripped of its fancy jargon,
>an object is a lexically-scoped subroutine with multiple entry
>points and persistent state. OOP has been around since subroutines were
>invented in the 1940s. Objects were fully supported in the early
>programming languages AED-0, Algol, and Fortran II. OOP was, however,
>regarded as bad programming style by Fortran aficionados".

I'm not an OOP programmer either, and I haven't seen the article. I am
a Fortran programmer, most of the time, and, from the quotes here it
sounds like this article is a good nominee for "most bogus article
appearing in a formerly reputable journal." An object implemented as a
Fortran subroutine with multiple entries allows only *one* instance of
the object, unless you explicitly set up and save state for each instance.
This hardly qualifies as "fully supported."
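Riley's single-instance point is easy to see in miniature. (Sketched in Python rather than Fortran purely so it can be run anywhere today; the mapping is module-level state ≈ a subroutine's SAVEd variables shared across ENTRY points, while a class gives you genuine per-instance state.)

```python
# Fortran-ENTRY-style "object": one block of persistent state shared
# by every caller, like a subroutine with SAVEd variables.
_balance = 0          # the single, shared persistent state

def deposit(amount):
    global _balance
    _balance += amount

def balance():
    return _balance

# With real object support, each account is an independent instance.
class Account:
    def __init__(self):
        self._balance = 0
    def deposit(self, amount):
        self._balance += amount
    def balance(self):
        return self._balance

deposit(10); deposit(10)        # every caller hits the same state
x, y = Account(), Account()
x.deposit(10); y.deposit(25)    # independent state per instance
print(balance(), x.balance(), y.balance())  # 20 10 25
```

To get a second "account" in the subroutine style you must duplicate the state and thread an index through every call by hand, which is exactly the "explicitly set up and save state for each instance" caveat above.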

What the author calls "OOP" is indeed (usually) bad programming style in
Fortran. This reflects the lack of useful support in the Fortran language
for this style of programming; it says nothing at all about the usefulness
of OOP techniques in general (or specifically in languages designed to
support OOP).

-Dan Riley (ri...@tcgould.tn.cornell.edu, cornell!batcomputer!riley)
-Wilson Lab, Cornell University

Cliff Joslyn

Nov 21, 1989, 2:36:28 PM

Well, this seems to be a consensus. Does *anyone* support *anything*
Guthery says?

Dario Alcocer

Nov 21, 1989, 2:45:22 PM
> I'm not an OOP programmer, but I read a real convincing argument in the
> latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
> Oriented?", pp. 80-86, by Scott Guthery.
>
> [text deleted -- d.a.]

>
> And more: "...the programmer is invited to pass the cost of expedience
> onto the user of the system. This wholesale sacrificing of runtime
> efficiency to programmer's convenience, this emphasis on the ease with
> which code is generated to the exclusion of the quality, usability, and
> maintainability of that code, is not found in any production programming
> environment with which I am familiar. Let's not forget the Intel
> 432...which was OOP in silicon, and it failed because it was just too
> slow. If we couldn't make OOP efficient in hardware, why do we think we
> can make it efficient when we emulate it in software?"
>
> And the conclusion: "OOP runs counter to much prevailing programming
> practice and experience: in allocating and controlling software costs,
> in modularity, in persistent state, in reuse, in interoperability, and
> in the separation of data and program. Running counter to the
> prevailing wisdom does not, of course, automatically make an innovation
> suspect but neither does it automatically recommend it. To date, in my
> opinion, advocates of OOP have not provided us with either the
> qualitative arguments or the quantitative evidence we need to discard
> the lessons painfully learned during the first 40 years of programming".
>
> Any comments?
>
> O------------------------------------------------------------------------->
> | Cliff Joslyn, Cybernetician at Large, cjo...@bingvaxu.cc.binghamton.edu
> | Systems Science, SUNY Binghamton, Binghamton NY 13901
> V All the world is biscuit shaped. . .

After reading the article, I initially thought it was a slam against OOP,
but after some reflection on what Scott said, it seems to me that much of
his criticism concerned the current _implementations_ of OOP. What I liked
about the article was that it took a critical look at OOP, and hopefully
now some of the drawbacks of current implementations of OOP can be
addressed, remembering that OOP is not a panacea and should be used only
when appropriate.

I did have some observations about what he said about object hierarchies:

"The unit of reuse in object-oriented programming is the
hierarchy. It is the hierarchy and not the object that is
the indivisible whole. Unlike a subroutine library where
you can just take what you need and no more, in object
oriented programming you get the whole gorilla even if you
just wanted the banana." -- Scott Guthery

I think this could definitely be the case under some implementations,
where the link step is static, done before run-time. In this case, unused
methods are still included, even though at run-time you may never call
them. People have seen this happen; I believe the 'linker' that
comes with Turbo Pascal 5.5 tries to strip unneeded methods while building
the executable.

However, I think a better approach is already possible. Provided module
granularity is sufficiently fine to increase performance, unused methods
could be kept on disk, being paged in as needed, and if their use during
run-time decreases, could be paged out. This can be done today
in OS/2, Windows, and any other OS that does paging at run-time. However,
it seems to me (correct me if I'm wrong) that this paging will have to be
OOP-sensitive to be effective, and I don't know whether this mechanism could
co-exist with preexisting software that is not OOP-developed. It is being
looked into; witness the announcement of a new memory-management system
developed for Quattro Pro by Borland, called VROOMM (Virtual Real-time Object
Oriented Memory Manager), that uses an LRU algorithm to keep the most
frequently used objects in RAM.
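The LRU idea behind such a scheme is simple to sketch. (This is a generic least-recently-used object cache in Python, not Borland's actual VROOMM code, whose internals were not published; the `load` callback stands in for paging an object in from disk.)

```python
from collections import OrderedDict

class LRUObjectCache:
    """Keep the most recently used objects 'in RAM'; when the cache is
    full, evict the least recently used one (which would then live on
    disk until paged back in). A generic sketch, not VROOMM itself."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._cache = OrderedDict()   # insertion order = recency order

    def touch(self, key, load):
        """Return the object for key, paging it in via load() on a miss."""
        if key in self._cache:
            self._cache.move_to_end(key)         # mark most recently used
        else:
            self._cache[key] = load(key)         # page in from "disk"
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)  # evict the LRU entry
        return self._cache[key]

cache = LRUObjectCache(2)
cache.touch("a", str.upper)
cache.touch("b", str.upper)
cache.touch("a", str.upper)   # "a" is now most recently used
cache.touch("c", str.upper)   # cache full: evicts "b", not "a"
print(list(cache._cache))     # ['a', 'c']
```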

Related to this, Scott also had some observations about performance-tuning
OOP systems, where he related the inefficiency of OOP systems to that of the
Intel 432 project. I think there is one difference between software and
firmware; firmware usually needs to be fairly static, while software
has the necessary environment to be more dynamic in its use of primary and
auxiliary memory. I think the key here, and to debugging OOP-based systems,
is that our run-time environment will have to change in order to actively
support OOP systems. In terms of debugging, the OS will have to be more
flexible in handling run-time errors, and will have to give the developer
more information about the state not only of the machine, but of the OOP
system as well; it's like the next logical progression from assembly-level
debugging to source-level debugging to (now) object-level debugging.

In summary, I think that the OS, or operating environment, will have to
change to allow OOP to be used effectively. What we have seen now is really
only a change in languages and development tools; what we need now are
operating environments that actively support OOP systems, while providing
the performance users demand and the control developers need.


Hasta...

Dario

+===============================+==================================+
| Dario Alcocer (San Diego, CA) | Internet......dar@nucleus.mi.org |
| via Nucleus | phone...............619-450-1667 |
+===============================+==================================+

Paul Vaughan

Nov 21, 1989, 4:55:17 PM

I've thought about a few of these issues, and I can see some merit to
comparing OOP with some outdated programming practices, but this
Guthery fellow has simply got it wrong.

From cjo...@bingvaxu.cc.binghamton.edu and his quote of Guthery:


He says a lot of things, but one quote: "Stripped of its fancy jargon,
an object is a lexically-scoped subroutine with multiple entry
points and persistent state. OOP has been around since subroutines were
invented in the 1940s. Objects were fully supported in the early
programming languages AED-0, Algol, and Fortran II. OOP was, however,
regarded as bad programming style by Fortran aficionados".

This would be true if it were only possible to make a single instance
of each object class. By making a C file with static variables and
several different functions that manipulate those variables, you can
get something close to object oriented programming. However, this
style is NOT OOP. It is difficult to maintain and has been abandoned
by OOP practitioners for very good reason. Guthery's mention of
multiple entry points isn't even close.

And more: "...the programmer is invited to pass the cost of expedience
onto the user of the system. This wholesale sacrificing of runtime
efficiency to programmer's convenience, this emphasis on the ease with
which code is generated to the exclusion of the quality, usability, and
maintainability of that code, is not found in any production programming
environment with which I am familiar.

This hardly deserves a reply, especially given the data reviewed on
the net recently showing comparable speed for C and C++ programs. One
point I'd like to make though is that sometimes the programmer is the
user. That is, programmers USE library code and system facilities to
program. Less sophisticated users also write code on a routine basis,
only that fact is usually obscured by a fancy interface, like a
spreadsheet.

Now for what I feel is a valid criticism. Object Oriented
Programs are generally not reentrant. This is generally not important
to most application writers, but an OO OS or embedded systems writer
would need to recognize that.

Paul Vaughan, MCC CAD Program | ARPA: vau...@mcc.com | Phone: [512] 338-3639
Box 200195, Austin, TX 78720 | UUCP: ...!cs.utexas.edu!milano!cadillac!vaughan

Dan Weinreb

Nov 21, 1989, 7:21:33 PM

He says a lot of things, but one quote: "Stripped of its fancy jargon,
an object is a lexically-scoped subroutine with multiple entry
points and persistent state. OOP has been around since subroutines were
invented in the 1940s. Objects were fully supported in the early
programming languages AED-0, Algol, and Fortran II. OOP was, however,
regarded as bad programming style by Fortran aficionados".

He misses the point entirely. Fortran II and Algol are sort of like
object-oriented programming where you can only have one instance of
any given class. This makes it pretty useless.

Of course, you can do object-oriented programming in any language
whatsoever. The X Window System is written in C and uses
object-oriented programming with inheritance. The language (C) does
not provide *support* for this style, so it ends up being verbose and
clumsy, but it does work. Another interesting example is Sun's RPC,
the source code of which is free (you can get it from uunet). It uses
very simple object-oriented programming in C; no inheritance, but you
can clearly see that it's OOP with instances and with the equivalent
of virtual functions. They again have to do all the work by hand, but
they follow consistent stylistic guidelines so it doesn't mess the
code up too much. If Sun RPC were not written in that style, it would
be a much worse piece of software. They use OOP in order to allow
extensibility: several alternative network protocols, several
alternative authentication methods, several kinds of XDR stream.
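The hand-rolled style Weinreb describes amounts to each instance carrying its own table of operations, the way C code does it with a struct of function pointers. (A sketch in Python for brevity; the transport names and functions here are invented, not taken from Sun's RPC source.)

```python
# Hand-rolled "virtual functions" with no language support for OOP:
# an instance is just state plus an explicit method table, dispatched
# through by hand -- the pattern C code like Sun RPC uses with structs
# of function pointers.

def tcp_send(self, data):
    return "tcp:" + data

def udp_send(self, data):
    return "udp:" + data

def make_transport(kind):
    """'Constructor': bundle state with the right operations table."""
    ops = {"send": tcp_send if kind == "tcp" else udp_send}
    return {"kind": kind, "ops": ops}

def send(obj, data):
    """Dispatch through the instance's own table -- a 'virtual' call."""
    return obj["ops"]["send"](obj, data)

t, u = make_transport("tcp"), make_transport("udp")
print(send(t, "hi"), send(u, "hi"))  # tcp:hi udp:hi
```

The call site `send(obj, data)` never names a protocol; adding a third transport means adding one table, not editing every caller. That is the extensibility argument above, made concrete.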

And more: "...the programmer is invited to pass the cost of expedience
onto the user of the system. This wholesale sacrificing of runtime
efficiency

This, of course, depends a lot on how much runtime efficiency is being
"sacrificed". The overhead in C++ is very small; this has been discussed
to death on this newsgroup.

to programmer's convenience, this emphasis on the ease with
which code is generated to the exclusion of the quality, usability, and
maintainability of that code,

That's really beyond the pale. OOP, in my experience, properly used,
greatly increases quality, usability and maintainability.

is not found in any production programming
environment with which I am familiar. Let's not forget the Intel
432...which was OOP in silicon, and it failed because it was just too
slow. If we couldn't make OOP efficient in hardware, why do we think we
can make it efficient when we emulate it in software?"

What an illogical argument. Some people at Intel figured out how to
do something very slowly, and they used hardware to do it,
therefore... Anyone who believes this has his brain turned off.

To say that this piece is poorly reasoned is to be too kind.

Dan Weinreb Object Design d...@odi.com

Bruce Cohen;685-2439;61-028

Nov 21, 1989, 8:40:01 PM
OK, I'll take the bull by the horns here:
>I'm not an OOP programmer, but I read a real convincing argument in the
>latest Dr. Dobb's Journal #158, "Are the Emperor's New Clothes Object
>Oriented?", pp. 80-86, by Scott Guthery.
>
>He says a lot of things, but one quote: "Stripped of its fancy jargon,
>an object is a lexically-scoped subroutine with multiple entry
>points and persistent state.

This is not a correct definition of OOP for the following reasons:
1) It completely ignores the issues of inheritance, data abstraction,
and data-hiding.

2) Objects aren't necessarily lexically-scoped (look at ST-80 blocks,
for instance).

3) (Maybe a bit picky as to terminology, but ...) an object really
can't be compared to a single subroutine, or even a single coroutine. It
might be better compared to a multi-threaded server, which provides
services (one per method) to access and modify a persistent state.

> OOP has been around since subroutines were
>invented in the 1940s. Objects were fully supported in the early
>programming languages AED-0, Algol, and Fortran II. OOP was, however,
>regarded as bad programming style by Fortran aficionados".

Again, not true if you look at the issues in 1) above.

>
>And more: "...the programmer is invited to pass the cost of expedience
>onto the user of the system. This wholesale sacrificing of runtime
>efficiency to programmer's convenience,

Ask a veteran C++ programmer about this "wholesale sacrificing". I have
programmed in OO Lisp, Smalltalk, and C++ (I've been primarily a
systems-level C programmer for the last six years, and before that I did a
lot of assembly language). I don't feel that I have sacrificed performance
to anything; in some cases I doubt that it would have been at all possible
to code the sorts of things I have done in C++ in C and get more than a few
percent improvement, and doing so would have made the projects I was
working on impractical (see my next comment).

> this emphasis on the ease with
>which code is generated to the exclusion of the quality, usability, and
>maintainability of that code, is not found in any production programming
>environment with which I am familiar.

Then it should be, because all three of those qualities are intimately
intertwined with ease of programming. Using OOP has allowed me to build
systems which are much MORE robust against changes, and much easier to
modify and enhance than would have been practical in C without much more
time and energy, which would not have been allowed on the projects.
Certainly, the data encapsulation alone is a major factor in increased
quality. Granted, that's not unique to OOP, but the behavior encapsulation
(inheritance and overriding of methods) is unique to OOP, and it is an
additional factor in improving quality.

> Let's not forget the Intel
>432...which was OOP in silicon, and it failed because it was just too
>slow. If we couldn't make OOP efficient in hardware, why do we think we
>can make it efficient when we emulate it in software?"

Yes, let's not forget it, but let's remember why it failed (I was in
another division at Intel at the time, and had reasonably good visibility
into the 432 project; in fact I had a prototype 432 board running in my
Intel blue box!). The problem wasn't OOP, it was in the implementation of
the silicon, and the way the software was layered on it. For instance, in
the Rev. 3 chip set, simply turning the access control data structures
upside down and addressing them off the object data pointers speeded almost
everything up by a factor of two. Also, because they couldn't fit as many
gates on a chip as they had originally planned, the processor had to be
put in two chips, with a 16-bit bus in between, which slowed things up
some. And so on. By the time they had the project far enough along to
start doing even the simplest optimizations, Marketing had already
published benchmarks which said that the 432 couldn't beat a lame dog into the
bath water, and the resultant public bad-mouthing killed any chance of
selling to anyone.

>And the conclusion: "OOP runs counter to much prevailing programming
>practice and experience: in allocating and controlling software costs,
>in modularity, in persistent state, in reuse, in interoperability, and
>in the separation of data and program.

I can't resist commenting on this remark about the separation of data and
program. It shows to me that the author of the article doesn't understand
OOP implementation at all. Certainly there are some OOPLs which allow you
to execute data (that's what LISP does for a living), but in the normal
course of events (i.e., if you don't twist things around to make it happen
yourself), neither Smalltalk nor C++ will do so. An object's state is
data, the methods which operate on it are code, and the two are separate.
You might choose which method to use on a given bit of state, or which
state to operate on, by looking at which or which kind of object you have,
but that's just normal programming practice. If you have ever coded a
finite-state machine you have built what amounts to an instance of an
object. Send it an input (invoke a method) and it will change its state
and (maybe) provide some output.
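Cohen's finite-state-machine analogy is easy to make concrete. (A generic turnstile FSM sketched in Python; the example is the editor's, not from the article, and the state names are invented.)

```python
class Turnstile:
    """A finite-state machine wrapped as an object: persistent state,
    one 'method' per input, output as the return value -- exactly the
    correspondence described above."""
    def __init__(self):
        self.state = "locked"

    def coin(self):
        self.state = "unlocked"     # paying always unlocks
        return self.state

    def push(self):
        if self.state == "unlocked":
            self.state = "locked"   # one person through, then relock
            return "pass"
        return "blocked"            # pushing a locked turnstile fails

t = Turnstile()
print(t.push())   # blocked
t.coin()
print(t.push())   # pass
print(t.push())   # blocked again
```

Sending the object an input (invoking a method) changes its state and maybe produces output, which is all the FSM formalism asks for.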

> Running counter to the
>prevailing wisdom does not, of course, automatically make an innovation
>suspect but neither does it automatically recommend it. To date, in my
>opinion, advocates of OOP have not provided us with either the
>qualitative arguments or the quantitative evidence we need to discard
>the lessons painfully learned during the first 40 years of programming".

On the contrary, OOP builds on lessons such as data abstraction.

>
>Any comments?
>

The rhetoric is very dramatic, but hardly compelling. I would be swayed
more if I believed that the author knew more about the subject being thrashed.

"Small men in padded bras don't look the same falling from high places."
- R.A. MacAvoy, "The Third Eagle"
Bruce Cohen
bru...@orca.wv.tek.com
Interactive Technologies Division, Tektronix, Inc.
M/S 61-028, P.O. Box 1000, Wilsonville, OR 97070

Mark T Vandewettering

unread,
Nov 21, 1989, 8:53:23 PM11/21/89
to
In article <46...@ncsuvx.ncsu.edu> j...@ecemwl.UUCP (Joseph N. Hall) writes:
>In article <26...@bingvaxu.cc.binghamton.edu> cjo...@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>>I'm not an OOP programmer, but I read a real convincing argument in the
>>latest Doctor Dobb's Journal #158, "Are the Emporer's New Clothes Object
>>Oriented?", pp. 80-86 by Scott Guthrey.
>>...
>>And more: "...the programmer is invited to pass the cost of expedience
>>onto the user of the system. This wholesale sacrificing of runtime
>>efficiency to programmer's convenience, this emphasis on the ease with
>>which code is generated to the exclusion of the quality, usability, and
>>maintainability of that code, is not found in any production programming
>>environment with which I am familiar. ...

This is not the least bit compelling, because it is seriously incorrect
in many ways. The first and foremost is that there is no one language
which is efficient or inefficient. Individual implementations are
either relatively efficient or relatively inefficient. This goes for
any program.

There has been a large amount of interest in C++ lately. Why? Because
it does present a better model for programming than C, with minimal
performance penalty.

And please illuminate me (this is rhetorical, not directed at any
of the people whose text is included here) as to how you can generate
code easily without it being quality, usable, and maintainable. Any code
which lacks those three qualities can hardly be called a program. (As an
aside, I find it amusing that we call programs "codes".)

>Pardon me, but is he discussing any OOP environment that is actually in
>use??? How does C++ sacrifice runtime efficiency in ANY manner? (I mean,
>if you've got to have virtual something you've got to have virtual
>SOMETHING ...) How is Smalltalk deficient (compared to, say, FORTRAN)
>in the quality, usability and maintainability of code? Does Mr. Guthrey
>perhaps prefer ADA? (Or, then again, maybe that's what he thinks represents
>the future of OOP.)

Smalltalk was typically implemented as an interpreter, which of course
made its environments much easier to use than your typical FORTRAN
system :-) It is comparing apples and oranges to compare the
speed of FORTRAN to the speed of Smalltalk. There has, of course, been
work on making Smalltalk a more efficient language, most notably the
SOAR project, which had some impressive results.

Like every other overly hyped topic, I feel that OOP has its merits and
its drawbacks. I never tire of hearing legitimate and well-thought-out
criticisms, or equally deserved praise. Mr. Guthery, on the other hand,
succeeds in making himself look foolish by perhaps arguing a valid
point, but for the wrong reasons.

Mark VandeWettering (even I don't share my opinions, why should my employer)

Mark T Vandewettering

unread,
Nov 21, 1989, 9:06:39 PM11/21/89
to

Gosh, I realized there was still more to bash Guthery for:

> He says a lot of things, but one quote: "Stripped of its fancy jargon,
> an object is a lexically-scoped subroutine with multiple entry
> points and persistent state. OOP has been around since subroutines were
> invented in the 1940s. Objects were fully supported in the early
> programming languages AED-0, Algol, and Fortran II. OOP was, however,
> regarded as bad programming style by Fortran aficionados".

Nope. Sorry. Uh uh. Wrong.

I might actually agree with the definition of objects. If you choose to
implement an object in a language, say Scheme, one obvious
implementation is as a closure with a message dispatcher (multiple entry
points) and persistent state maintained inside the closure.

What about inheritance? Instantiation of multiple instances of a class?
There is more to objects than just lexical closures.
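The closure-with-a-dispatcher encoding can be sketched concretely. Here is a hypothetical illustration using modern C++ lambdas for brevity (the names `make_counter`, `increment`, and `value` are invented for this sketch):

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Fake an "object" as a closure with a message dispatcher: the captured
// variable is the persistent state, and the returned function is the
// single entry point that dispatches on a message name.
std::function<int(const std::string&)> make_counter() {
    int count = 0;  // persistent state, captured by the closure
    return [count](const std::string& msg) mutable -> int {
        if (msg == "increment") return ++count;
        if (msg == "value")     return count;
        throw std::invalid_argument("unknown message: " + msg);
    };
}
```

Note what the sketch lacks: there is no inheritance and no class hierarchy, which is exactly the point above — a closure gives you encapsulated state and dispatch, but not subclassing.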

And of course, the languages he dredges up had absolutely no support for
object-oriented programming. Sure, you might be able to fake something
that vaguely resembles an object, but it is obvious that you are faking
it.

> Let's not forget the Intel
> 432...which was OOP in silicon, and it failed because it was just too
> slow. If we couldn't make OOP efficient in hardware, why do we think we
> can make it efficient when we emulate it in software?"

I haven't forgotten it: has Mr. Guthery ever read anything other than
glossy BS about it?

I hate this argument. First of all, the 432 had many other goals
besides being an effective object-oriented processor. It was supposed
to be easily extendable to multiprocessor configurations (requiring no
code changes), and to provide data protection and a sophisticated fault
mechanism with recovery. It had many noble goals, and it succeeded at
a lot of them. It was a huge chip. It was slow. It had no cache. It was
a commercial failure. So what?

But just building something "in hardware" doesn't make it fast. It
takes an appropriate design for the chip, the compilers, and the programs.
That technology is improving.

Mark VandeWettering

Gary Murphy

unread,
Nov 22, 1989, 9:09:40 AM11/22/89
to
I had the pleasure of reading a pre-release of Scott's paper, which
I found somewhat humorous, and somewhat disturbing. While he has
some good points, there is little in what he says that does not apply
to all programming, and especially to any new paradigm. I don't want
to go into this at length, but I will say that issues such as
non-standard object representation, the requirement for development
tools, and the supposed counter-evolutionary aspects are greatly
overstated, and were as true for the early C or Pascal implementations
as they are for the present state of OOP. Does anyone know of two
Prologs which use the same internal structures or even the same syntax?
Can you say "proprietary technology"?

I haven't seen the DDJ print, but my advance notes include a summary
of "Twenty-Five Reasons" in 9 sections. Under "Theory", all of the
points apply equally to all programs using structures and pointers;
"Development" makes one valid point about the lack of proper
tools (Scott thinks toolmaking is bad; fortunately he doesn't build
ships), with 7 other points which apply to all development; "Debugging"
gives 5 non-points, where the fifth merely complains that the old tools
are not applicable; "Maintenance" likewise applies in all 7 points to
many programming systems; "Persistent State" shows three points common
to any program which uses trees & graphs; and the remaining sections
likewise jump on general problems as if they were peculiar to OOP.

This is not to praise OOP or to pan Scott's well-written article; it's
just that I wish he'd stayed with nine or ten good points rather than
bloating otherwise valid arguments with non-issues. For instance
(oops! can I say 'instance'?), he complains that OOP requires retooling
the development organization without regard for the retooling we did
when we shifted from patch-cords to punch-cards to consoles, or from
programming in hex to Fortran and later to C, Prolog, or whatever. He
adds that he doesn't want YAPL (yet another programming language).

I think we all should read this. Everyone here found it at least
amusing. He has some very valid points which should be addressed by
any OOP plans, and he clearly dispels some of the marketing hype
surrounding OOP. I'd caution anyone not to rely on this paper when
deciding for or against OOP, but instead to use it as a guide for
which questions to ask.
--
Gary Murphy decvax!utzoo!dciem!nrcaer!cognos!garym
(garym%cogno...@uunet.uu.net)
(613) 738-1338 x5537 Cognos Inc. P.O. Box 9707 Ottawa K1G 3N3
"There are many things which do not concern the process" - Joan of Arc

Walter Bright

unread,
Nov 22, 1989, 2:20:14 PM11/22/89
to
In article <46...@ncsuvx.ncsu.edu> j...@ecemwl.UUCP (Joseph N. Hall) writes:
<one of the contributors stated flatly that there was no viable system for
<winning at blackjack and that we shouldn't bother him with letters saying
<there was ...
<I can tell
<you that their expected take per hand IF you play correctly is about .5%,

Granted. But let's look at some figures:
pay_per_hour = rate_of_win * size_of_bet * bets_per_hour
Assuming:
rate_of_win = .005 (.5%)
size_of_bet = $500 (house limit)
bets_per_hr = 20 (3 minutes per hand)
gives a pay_per_hour of $50/hr. Hardly raking it in for such exhausting work.
There are also serious problems because this is an *average*, which means
there are possibilities of going ten grand in the hole pretty easily.
You're gonna need a large bankroll and a lot of guts to stick it out till
the odds have a chance to work for you.
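The arithmetic above can be checked directly; a quick sketch in C++ (the helper name `pay_per_hour` is invented here):

```cpp
// Expected hourly take from the formula above:
// pay_per_hour = rate_of_win * size_of_bet * bets_per_hour
double pay_per_hour(double rate_of_win, double size_of_bet,
                    double bets_per_hour) {
    return rate_of_win * size_of_bet * bets_per_hour;
}
```

With rate_of_win = 0.005, size_of_bet = $500, and 20 bets per hour, this gives the $50/hr quoted above.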

I'll stick to software, which pays better, and the work is easy :-).
Of course, I don't have a pretty girl plying me with drinks while I program :-)

All in all, I'd have to agree that blackjack is not viable. Gambling in
Vegas is strictly for 1) suckers and 2) entertainment.

Wayne Throop

unread,
Nov 22, 1989, 4:48:47 PM11/22/89
to
> cjo...@bingvaxu.cc.binghamton.edu (Cliff Joslyn)

> I'm not an OOP programmer, but I read a real convincing argument in the
> latest Doctor Dobb's Journal #158, "Are the Emporer's New Clothes Object
> Oriented?", pp. 80-86 by Scott Guthrey. [...]
> Any comments?
> [... rebuttals omitted ...]

> Well, this seems to be a consensus. Does *anyone* support *anything*
> Guthrie says?

Sort of. He makes a couple of good points addressed towards current
*implementations* of OOPLs, but has trouble even coming close to
the broad side of the barn, that is, addressing the general notion
of object-oriented program construction as an abstract method.

I'll address his points as laid out in the sections of the article:

- Where's the Evidence? [..for productivity increase..]

Guthery piles up some assumptions and points to some studies showing
little or no gain in productivity. Yet I've seen dozens of measurements
showing that OOPLs result in equivalent functionality with fewer lines
of code, and with shorter debugging times. I made such a small study
myself (though admittedly, my OOPL wasn't really one, lacking inheritance...
nevertheless). OOPLs aren't the silver bullet, but I'd say there's
plenty of evidence that they aren't chopped liver, either.

- What is an Object?

Guthery's explanation of what an object is "stripped of all the jargon"
is so far away from accuracy that it is hard to point out just where
his analogy IS accurate. Mainly, it is accurate in code locality and
the association of multiple names for operations. The analogy he
presents fails in most else, and yet these other points are what he
associates with OOPLs, and condemns them for. MOST bogus. One of
these, the "persistence" point, is addressed in more detail later
by Guthery, so I'll address it in more detail there also. Here I'll
just say that characterizing the specification of the operations
available on an object as "multiple entry points" is as blatantly
misleading as anything I've ever heard. And I've heard some doozies.

- What's the Cost of OOP Code Reuse?

Having claimed that things in OOPland haven't been measured in the
"Where's the Evidence?" section, Guthery nevertheless ploughs ahead
and makes bald unsupported statements such as "Your system will be
bigger, run slower, and cost more to maintain than with subroutine-
style reuse." Where is the evidence of this, other than Guthery's
handwaving? It runs counter to my experience.

- How to Combine Object Hierarchies

Briefly, the examples he gives of this are bogus, and are better
represented as objects containing other objects as part of their
state rather than as examples of inheritance. So the answer to the
question of "how to combine object hierarchies" is "certainly not
the way Guthery recommends or imagines it to be done".

- How to Tune an Object-Oriented Program?

Here the issue of monitoring and debugging OOPLs is butchered. It is
true that some OOPLs are worse off than some procedural languages as far
as debugging and monitoring tools, C++ in particular having debugging
problems. But this problem is not universal, and many of the
performance problems Guthery attributes to OOP methods are in fact
generic code re-use problems. For example, "the performance problem may
be in the code of a class you didn't implement", which is the same as
objecting to the use of the standard C libraries because they might
contain some slow routines that could be bettered. If a given class
from a library is too slow, fix it (if you own or control the owner of
the library), negotiate to get it fixed (if you can negotiate with the
owner), or create your own class with equivalent interface. These are
the same options open to somebody who finds that (say) qsort(3) is a
bubble sort in some unfortunate instance of the C library.

So, while C++ may be somewhat hard to debug and monitor, this is primarily
because the most common C++ compiler (cfront) produces no persistent
debugging database for any debugger or monitor to use. In cases where
this problem doesn't exist (eg: G++ and gdb, or Smalltalk systems),
debugging and monitoring facilities are excellent.

And of course, the side-issue of the Intel 432 is NOT a slam on OOP methods.
I've seen many convincing analyses of why the 432 was slow, and it was
NOT because of OOP.

- Do Object Oriented Programs Coexist?

Here Guthery makes one of his near-approaches to sense.
OOPLs, because they hide their data representations, can make it
arbitrarily hard to co-exist with other OOPLs or even other procedural
languages. Yet these problems aren't problems of an OOPL, they are
problems of ALL languages where the language system takes extreme pains
to keep control of data formats and relieve the user of housekeeping
chores. For example, Lisp systems (or in general, systems with extreme
garbage collection). Yet even in these systems, it is not universal
that they cannot deal with foreign language systems (DG Lisp for AOS/VS
is the case I'm most familiar with in this regard). And, of course, C++
can deal with foreign code quite well.

- What are the Consequences of Persistent State?

The argument here is silly. Objects have state no more persistent than
primitive datatypes. Guthery's argument is like complaining that a
record type representing a symbol-table node is an evil thing, because
it contains persistent state. The state is no more persistent than the
object itself, and might well be automatic, and well scoped indeed. As
mentioned above, this whole argument is based on the idea that objects
are "really" groups of functions with a single instance of shared data
of process persistence. This is simply not the case.

- Can You Get the Development System Out of the Production System?

Here Guthery makes the nearest approach to sense that he manages.
But again, this is not a problem with OOPLs, but with all languages
with extensive and interdependent libraries. Lisp systems are
particularly vulnerable to this, because references to parts of the
system can easily be discovered only as late as runtime. SNOBOL
systems also. So again, the problem of "applications delivery"
as I've heard it called is simply not a problem of OOPLs in general,
(for example, it isn't much of a problem in C++, really). It is
more of a problem of systems that blur the distinction between
compile-time and run-time (that is, interpretive systems).

So, to sum up, Guthery is simply missing the point most of the time.
His faulty analogy has led him on a wild goose chase after purported
problems with OOP which are really problems either with his mistaken
analogy, or with some (but not all) OOPLs.

So, put it this way.

The Baby: OOP as a method of design and implementation.
The Bathwater: Some insular or immature OOPLs.

It just doesn't make sense to throw out the former while changing
the latter.
--
Wayne Throop <backbone>!mcnc!rti!sheol!throopw or sheol!thr...@rti.rti.org

geoff george

unread,
Nov 22, 1989, 10:04:49 PM11/22/89
to
> He says a lot of things, but one quote: "Stripped of its fancy jargon,
> an object is a lexically-scoped subroutine with multiple entry
> points and persistent state. OOP has been around since subroutines were
> invented in the 1940s. Objects were fully supported in the early
> programming languages AED-0, Algol, and Fortran II. OOP was, however,
> regarded as bad programming style by Fortran aficionados".

"Stripped of their fancy jargon, 'break', 'continue' and 'return' statements
are simply goto statements. The goto statement has been around since
programming languages were invented in the 1940s. It was fully supported in
the early programming languages (...). The goto statement was, however,
regarded as bad programming style by structured programming aficionados."

Therefore, of course, they are just as bad as goto's, and should be
eschewed for the same reasons.

:-)

--
geoff george Internet: fro...@pyr.gatech.edu
uucp: ...!{decvax,hplabs,ihnp4,linus,rutgers,seismo}!gatech!gitpyr!frobozz

"Ordinary f---ing people - I hate 'em. Ordinary person spends his life avoiding tense situations; repo man spends his life getting INTO tense situations."

Mark T Vandewettering

unread,
Nov 22, 1989, 11:36:48 PM11/22/89
to
A digression has appeared here on a topic of relative interest to
myself. I avoided responding to it originally, but will do so now. Any
followups should go to alt.gambling, where such discussions are common.
I have added it to the newsgroup line, please delete comp.object.

>Granted. But lets look at some figures:
> pay_per_hour = rate_of_win * size_of_bet * bets_per_hour
>Assuming:
> rate_of_win = .005 (.5%)
> size_of_bet = $500 (house limit)
> bets_per_hr = 20 (3 minutes per hand)
>gives a pay_per_hour of $50/hr. Hardly raking it in for such exhausting work.
>There are also serious problems because this is an *average*, which means
>there are possibilities of going $10 grand in the hole pretty easilly.
>You're gonna need a large bankroll and a lot of guts to stick it out till
>the odds have a chance to work for you.

There is no table in Vegas that takes 3 minutes to play a single
blackjack hand. A player should be able to sustain a much higher rate. You
also have neglected the possibility of a single player playing multiple
hands.

Another way to extract maximum profit in minimum time is to vary bet size,
although doing so is a pretty surefire way to tip off the house that you
are counting.

>All in all, I'd have to agree that blackjack is not viable. Gambling in
>Vegas is strictly for 1) suckers and 2) entertainment.

If you are gambling, you are a sucker. If you have an edge (and in
blackjack, it might be possible depending on the rules in effect at a
particular casino), then it's just cashing a paycheck.

mark

Richard Sargent

unread,
Nov 23, 1989, 8:43:47 AM11/23/89
to
a...@alice.UUCP (Andrew Koenig) wrote in <10...@alice.UUCP>
> Date: 21 Nov 89 14:26:22 GMT

>
> In article <29...@cbnewsl.ATT.COM>, l...@cbnewsl.ATT.COM (leslie.b.locklear) writes:
>
> > I do have to agree with the author on one thing. There is far too much hype
> > being spread about the benefits that object-oriented programming will bring
> > to us all.
>
> Hear, hear! Down with strongly-hyped languages!
>

I take exception to this phrasing! We want to keep the language.
It's the silly hype that we want to get rid of.

Richard Sargent Internet: ric...@pantor.UUCP
Systems Analyst UUCP: uunet!pantor!richard

BJORNERSTEDT Anders

unread,
Nov 23, 1989, 11:37:36 AM11/23/89
to
In article <41...@cadillac.CAD.MCC.COM> vau...@mcc.com (Paul Vaughan) writes:
>
> Now for what I feel is a valid criticism. Object Oriented
>Programs are generally not reentrant. This is generally not important
^^^^^^^^^^^^^

I dont understand you. What prevents a C++, ObjectiveC or Eiffel
program from being reentrant ?

---------------------------------------------------------------------
Anders Bjornerstedt E-mail: and...@cuisun.unige.ch
Centre Universitaire d'Informatique
12 rue du Lac, CH-1207 Geneva
---------------------------------------------------------------------
Tel: 41 (22) 787.65.80-87 Fax: 41 (22) 735.39.05
Home: 41 (22) 735.00.03 Telex: 423 801 UNI CH
---------------------------------------------------------------------

Marco S Hyman

unread,
Nov 23, 1989, 2:04:29 PM11/23/89
to
I treated the original (DDJ) article as flame bait and tried not to get
hooked. However, in article <15...@bnlux0.bnl.gov> d...@bnlux0.UUCP
(David R. Stampf) makes an excellent point:
... then we still will have learned something. Either case, we win.

Some forget that progress is only made when people are willing to try new
things, some of which will fail.
It is kind of scary to think that the Fortran model - one of our
first tries at programming - was the right way ;-).

Without the will to try new things we'd all be stuck with the Fortran model
-- or worse. OOP is just one of the ``new things'' (only 20 years old or
so) to try. Try it. Make up your own mind.

// marc
--
// Marco S. Hyman {ames,pyramid,sun}!pacbell!dumbcat!marc

Jonathan Shapiro

unread,
Nov 23, 1989, 5:19:39 PM11/23/89
to
In article <10...@alice.UUCP> a...@alice.UUCP (Andrew Koenig) writes:
>Hear, hear! Down with strongly-hyped languages!
>--
> --Andrew Koenig
> a...@europa.att.com

Strong hyping is for weak whines...

Jonathan S. Shapiro
Silicon Graphics, Inc

Tom Frauenhofer

unread,
Nov 27, 1989, 11:17:29 AM11/27/89
to
> He says a lot of things, but one quote: "Stripped of its fancy jargon,
> an object is a lexically-scoped subroutine with multiple entry
> points and persistent state. OOP has been around since subroutines were
> invented in the 1940s. Objects were fully supported in the early
> programming languages AED-0, Algol, and Fortran II. OOP was, however,
> regarded as bad programming style by Fortran aficionados".

Personally, I regard Fortran programming as bad style myself ( :-) ).

Seriously, what a load of ____! This is an apples-and-oranges argument. Of
course OOP is bad Fortran style - Fortran isn't an OOPL. So what? I don't
see a point here.

> And more: "...the programmer is invited to pass the cost of expedience
> onto the user of the system. This wholesale sacrificing of runtime
> efficiency to programmer's convenience, this emphasis on the ease with
> which code is generated to the exclusion of the quality, usability, and
> maintainability of that code, is not found in any production programming
> environment with which I am familiar.

Programming convenience != sacrificing runtime efficiency.
Ease of maintenance != sacrificing runtime efficiency.

Should we sacrifice programming convenience and ease of maintenance for
improved runtime performance? I want them all, and most serious software
developers do, too. Any tradeoffs are made on a case-by-case basis. OOP is
a tool.

Many people can write efficient, maintainable code in a "convenient to use"
OOP. I know that I am capable of writing inefficient, unmaintainable code
in a "less-than-convenient" language like C or Fortran. It's how you use
the tool as much as it is what the tools give you.

> Let's not forget the Intel
> 432...which was OOP in silicon, and it failed because it was just too
> slow. If we couldn't make OOP efficient in hardware, why do we think we
> can make it efficient when we emulate it in software?"

The 432 is not slow because it is "OOP in silicon." It was slow partly
because of its Hydra-based capability mechanisms (I said partly. There may
have been other reasons as well).

Don't forget that the 432 predates a lot of the current research in OOP.

Slow performance of one example != OOP is slow.

Sorry for clogging this group with my ramblings. I've made commentaries on
this (and other) groups in the past, but at least a network like this allows
dialog between the author and the readers. Sure, DDJ has a letters page,
but who knows if they'll publish letters against this article, or if they do
they might let the author give a reply like "Here are these idiot OO people
proving my point..." along with further nonsense. Grrr!

Thomas V. Frauenhofer ...!rutgers!rochester!cci632!ccird7!tvf
*or* ...!rochester!kodak!swamps!frau!tvf *or* ...!attctc!swamps!frau!tvf
"Had coffee with Melnick today. He talked to me about his idea of having all
government officials dress like hens." - Woody Allen

Jonathan Shapiro

unread,
Nov 27, 1989, 11:24:41 AM11/27/89
to
In article <22...@dataio.Data-IO.COM> bri...@dataio.Data-IO.COM (Walter Bright) writes:
>I'll stick to software, which pays better, and the work is easy :-).

Software and BlackJack look a lot alike:

pay_per_hour = (rate_of_win * size_of_bet * bets_per_hr) / #_programmers
Assuming:
rate_of_win = .05 (5%)
size_of_bet = $100,000 (UNIX software source license fee)
bets_per_hr = 10 (How fast can I sell them?)
#_programmers = 1 (to maintain a UNIX...)

gives a pay_per_hour of: $50

>Of course, I don't have a pretty girl plying me with drinks while I program :-)

All things considered, BlackJack is much more fun.

Jon

Jonathan Shapiro

unread,
Nov 27, 1989, 11:34:59 AM11/27/89
to
I do not believe that it is feasible to add continuations to C++ for
any number of reasons, but I would be interested to hear the reactions
in this community regarding their utility in object-oriented programming.

For those of you who are not already conversant with continuations,
the idea comes from Scheme (though it has earlier antecedents), and
looks like this:

(call-with-current-continuation
  (lambda (continuation)
    (body ...)))

The call-with-current-continuation form potentially returns multiple
times, because the continuation can be saved to a global variable. A
subsequent invocation can be done with:

(continuation value-producing-form)

which causes a return from the call-with-current-continuation with
that value.

Logically, the continuation captures the future state of the program
at the point at which it was captured. That is, it captures a pc, a
set of registers, and a stack. Note that this implies garbage
collection of stack frames, because a live frame in an enclosing
context can be shared between a continuation and subsequent code.

Continuations have seen use as error handlers, because they look like
function invocations. One can set up an exception handling routine,
capture it with a continuation, and use the continuation as a
"longjump" type mechanism to get out of a bad state.

Continuations are different from longjump in that they capture the
stack as well as the registers.
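Full first-class continuations cannot be expressed in C++, but the "longjump type mechanism" described above — the one-shot, upward-only escape — can be sketched with exceptions. This is a hypothetical illustration (the names `Escape` and `call_with_escape` are invented for it), not a real call/cc:

```cpp
#include <functional>

// Carries the value passed to the escape "continuation".
struct Escape { int value; };

// Runs body with an escape function k. Invoking k(v) inside body
// abandons the rest of body and makes call_with_escape return v.
// Unlike Scheme's call/cc, k cannot be saved and re-entered later:
// the stack is not captured, so only an upward escape is possible.
int call_with_escape(
        const std::function<int(std::function<void(int)>)>& body) {
    try {
        return body([](int v) { throw Escape{v}; });
    } catch (const Escape& e) {
        return e.value;
    }
}
```

The sketch makes the limitation concrete: once `call_with_escape` has returned, the "continuation" is dead, whereas a Scheme continuation can be returned from multiple times.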

A detailed description can be found in the Revised Revised... Report
on scheme, which can be obtained via anonymous FTP from
zurich.ai.mit.edu.

Jonathan Shapiro
Silicon Graphics, Inc.

Ted Theodore (W) Leung

unread,
Nov 27, 1989, 11:11:42 PM11/27/89
to

If you look in Colwell's paper in the Aug 1988 ACM Transactions on
Computer Systems, you'll find an excellent analysis of what really
slowed the 432 down. Mostly, it was a bad compiler, few registers, and
narrow buses (these last two being artifacts of what the fabrication
technology could bear). If he'd like to beat up on O-O
implementations, he should pick on Dave Ungar's SOAR processor or Self
compiler, to be fair...
--------------------------------------------------------------------
Internet/CSnet: t...@cs.brown.edu | Ted "Theodore" Leung
BITNET: t...@BROWNCS.BITNET | Box 1910, Brown University
UUCP: uunet!brunix!twl | Providence, RI 02912

David S. Masterson

unread,
Nov 28, 1989, 2:14:15 AM11/28/89
to
In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:

I do not believe that it is feasible to add continuations to C++ for
any number of reasons, but I would be interested to hear the reactions
in this community regarding their utility in object-oriented programming.

Hmmm. Something in this caught my interest, but I'm not quite sure what.
Taking a purely object-oriented stab at it, might the idea of continuations be
implemented as merely messaging a static data area? Is there more to it?
--
===================================================================
David Masterson Consilium, Inc.
uunet!cimshop!davidm Mt. View, CA 94043
===================================================================
"If someone thinks they know what I said, then I didn't say it!"

Dan Weinreb

unread,
Nov 28, 1989, 1:38:16 PM11/28/89
to
In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:


I do not believe that it is feasible to add continuations to C++ for
any number of reasons, but I would be interested to hear the reactions
in this community regarding their utility in object-oriented programming.

They are just as useful in object-oriented programming as in straight
Lisp or Scheme. However, continuations in the Scheme style are only
useful if full support is provided for lexical scoping. C and C++
have no lexical scoping whatsoever. So I agree that it is not
feasible to add them to C++. Even lexical scoping alone, with or
without continuations, would be a great benefit, but I'm sure I'll
never see it in C or C++.

Peter C. Damron

unread,
Nov 28, 1989, 3:11:10 PM11/28/89
to
In article <1989Nov28.1...@odi.com> d...@odi.com writes:
>In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>
> I do not believe that it is feasible to add continuations to C++ for
> any number of reasons, but I would be interested to hear the reactions
> in this community regarding their utility in object-oriented programming.
>
>They are just as useful in object-oriented programming as in straight
>Lisp or Scheme.

How can you make this claim?

>However, continuations in the Scheme style are only
>useful if full support is provided for lexical scoping. C and C++
>have no lexical scoping whatsoever.

I just had to reply when I saw this. C and C++ are definitely "lexically
scoped" (I would prefer to call it statically scoped).

Peter.

---------------
Peter C. Damron
Dept. of Computer Science, FR-35
University of Washington
Seattle, WA 98195

pet...@cs.washington.edu
{ucbvax,decvax,etc.}!uw-beaver!uw-june!peterd

Chet Murthy

unread,
Nov 28, 1989, 4:06:27 PM11/28/89
to
pet...@cs.washington.edu (Peter C. Damron) writes:


>I just had to reply when I saw this. C and C++ are definitely "lexically
>scoped" (I would prefer to call it statically scoped).

Yes, they are, but they don't provide support for closures as
first-class objects (the funarg problem). So they don't provide
"full" support for lexical scoping. Scheme/ML do.
--chet--

--chet--
mur...@cs.cornell.edu

John Gateley

unread,
Nov 28, 1989, 4:33:24 PM11/28/89
to
In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>I do not believe that it is feasible to add continuations to C++ for
>any number of reasons, but I would be interested to hear the reactions
>in this community regarding their utility in object-oriented programming.

I don't see why it would not be feasible: C++ has for loops and other
control structures, so why not continuations as well?

As for object-oriented continuations: a continuation is a technique for
modelling/creating flow of control. In pure object-oriented systems, this
is done by message passing. Thus, an OO continuation would probably be
something which modelled a process: like a coroutine object, which you send
messages to (like resume). This is a half-baked idea, but maybe worthwhile.
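The coroutine-object idea can be sketched without any language support: an object whose resume method continues from where the previous call left off, with the resumption point held as explicit object state. A hand-rolled, hypothetical example (the names `Countdown` and `resume` are invented here):

```cpp
// A "coroutine object": each resume() picks up where the last call
// stopped, because the position is saved as object state between calls.
class Countdown {
public:
    explicit Countdown(int start) : n(start) {}
    // Yields start, start-1, ..., 0, then -1 once exhausted.
    int resume() { return n >= 0 ? n-- : -1; }
private:
    int n;  // the saved "program counter" between resumes
};
```

Sending resume messages to a `Countdown(2)` yields 2, 1, 0, and then -1 forever — the flow of control lives in the object's state rather than in the caller's stack.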

John
gat...@m2.csc.ti.com

Vinod Grover

unread,
Nov 28, 1989, 4:51:49 PM11/28/89
to
In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>I do not believe that it is feasible to add continuations to C++ for
>any number of reasons, but I would be interested to hear the reactions
>in this community regarding their utility in object-oriented programming.
>

I do not know if you are talking about first-class continuations or not, but
in languages which are stack-based, first-class continuations are not trivial
to add. (By stack-based I mean that the procedure entry/exit sequence
behaves in a stack-like fashion.) In C, the idea of setjmp/longjmp comes as
close to continuations as one can get. Admittedly, this is not as elegant as
Scheme's notion of continuation, but it is workable in concept.


Vinod Grover
Sun Microsystems.

Jonathan Shapiro

Nov 28, 1989, 5:09:24 PM
In article <CIMSHOP!DAVIDM.89N...@uunet.UU.NET> cimshop!dav...@uunet.UU.NET (David S. Masterson) writes:
>In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>
> I do not believe that it is feasible to add continuations to C++ for
> any number of reasons, but I would be interested to hear the reactions
> in this community regarding their utility in object-oriented programming.
>
>Hmmm. Something in this caught my interest, but I'm not quite sure what.
>Taking a purely object-oriented stab at it, might the idea of continuations be
>implemented as merely messaging a static data area? Is there more to it?
>--
>David Masterson Consilium, Inc.
>uunet!cimshop!davidm Mt. View, CA 94043

I don't believe so. What is interesting about continuations is that
they freeze an execution context into an object whose internals are
not exposed to the programmer, but which can be invoked at a later
time and passed a single value which is the value to return. Since
the object can be saved outside of the scope in which it was defined,
the continuation can be returned from multiple times.

The reason messaging a static data area isn't enough is that you need
to preserve the stack, and portions of the stack can be shared between
the captured continuation and the main program thread.

As an example:

(if (call-with-current-continuation
      (lambda (cont)
        (set! *saved-continuation* cont)
        #f))
    (do-something-interesting))

When first run, the continuation is passed to the lambda, which saves
it and returns #f, causing the IF to fail. At any later time, the
user can say:

(*saved-continuation* non-#f-value)

and the body of the if will be run, along with any code that ran
following the execution of the if the first time. One could therefore
package up an error routine in a top level loop as shown above and
then implement

(define (recover-from error-type cause-info)
  (call-with-current-continuation
    (lambda (cont)
      (function-that-handles-error-type cont cause-info))))

(define (catch-math-error cont where-happened cause-description)
  (if (try-to-recover cause-description)
      (cont #t)
      (*saved-continuation-for-unrecoverable-math-error* where-happened)))

And then say in the code encountering the error:

(if (recover-from 'math-error 'overflow)
    (continue-happily)
    (die-horribly))

Note that the "cont" can be invoked by any handler in the chain to get
back to the *original* context, and that if any of these pass #t to
the continuation, execution will continue appropriately. If some
handler gives up and invokes the continuation with #f, we call
DIE-HORRIBLY, and presumably exit.

I don't know if that makes things clearer or not...

Patrick Logan

Nov 28, 1989, 5:49:37 PM

Jonathan Shapiro (sh...@delrey.sgi.com) is interested in the utility of
continuations in object-oriented programming.

My response is they have great utility. He mentioned an example of
defining an exception handling system using continuations. If you're
unfamiliar with continuations, consider another, unrelated example and
you may begin to realize their power.

ASIDE...
Before I continue with the example, I think it's important to point
out that continuations exist conceptually in any programming language
that provides sequential execution. For example, C, Pascal, and C++
have sequential statements that can be described using the concept of
continuations. Even Prolog messes with sequential execution with the
concepts of backtracking and cut. The real *power* of continuations
comes with languages such as Scheme that provide continuations as
full-fledged objects in the language.
END OF ASIDE

My example of an object that uses continuations is a discrete event
simulator. Let's say you're simulating (very simplistically**) a digital
electronic circuit. You could have an object representing a two-input
AND gate. This object has (among others) a method for setting the
value of an input. Here is some pseudo code describing that method:

When one of the input pins is assigned a new value perform...
(1) Assign the new value to the input pin.
(2) Notify the discrete event simulator that the next step
should execute two simulation time units from the current time.
This models the delay of the real AND gate.
(3) Assign to the output pin the result of AND'ing the input pins.

In a Scheme-based object system this might look like:

(define-method (set-input-pin! and-gate pin-number new-value)
  ;; Step #1.
  (case pin-number
    ((1) (set! input-pin-one new-value))
    ((2) (set! input-pin-two new-value))
    (else (error "Invalid pin number." pin-number)))
  ;; Step #2.
  (simulator 'wait-for 2)
  ;; Step #3.
  (self 'set-output-pin! (and input-pin-one input-pin-two)))

The wait-for message to the simulator object implements step #2 above and
uses continuations. The pseudo code may be:

When the wait-for message is sent perform...
(1) Capture the current continuation.
(2) Schedule it to be executed at the current time plus the value
sent in the message.
(3) Choose the next scheduled continuation.
(4) Set the current time to be the time of that continuation.
(5) Execute that continuation.

The Scheme-like code may be:

(define-method (wait-for simulator relative-time)
  ;; Step #1: capture the current continuation.
  (call-with-current-continuation
    (lambda (k)
      ;; Step #2: schedule it at the current time plus relative-time.
      (scheduler 'schedule k (+ (self 'current-time) relative-time))
      ;; Step #3: bind these two names to the two values
      ;; returned by the 'next message.
      (receive (next-scheduled-k new-time)
          (scheduler 'next)
        ;; Step #4: set the new time.
        (self 'set-time! new-time)
        ;; Step #5: execute the continuation with a meaningless argument.
        (next-scheduled-k #t)))))

So you can see that continuations are used to schedule future events
in a neat way. They allow control to be handed over to other events
that are scheduled to occur sooner. By embedding the capturing of
continuations in the simulator object, the details are abstracted so
that the description of the model is also very clean.

** This algorithm does not correspond to Mentor Graphics' products! 8)

--
Patrick Logan | ...!{decwrl,sequent,tessi}!mntgfx!plogan
Mentor Graphics Corporation | plo...@pdx.MENTOR.COM
Beaverton, Oregon |

David Chase

Nov 28, 1989, 6:06:52 PM
In article <34...@cornell.UUCP> Chet Murthy writes:
>pet...@cs.washington.edu (Peter C. Damron) writes:

>>I just had to reply when I saw this. C and C++ are definitely "lexically
>>scoped" (I would prefer to call it statically scoped).

>Yes, they are, but they don't provide support for closures as
>first-class objects (the funarg problem). So they don't provide
>"full" support for lexical scoping. Scheme/ML do.

Try "lexical scope with nested procedures" for "lexical scope", and I
think you'll fix your misunderstanding. (I got this from the Dragon
Book, 2nd ed., and I once abused the terminology in the same way). C,
Pascal, Modula-3, and Scheme all have lexical scope -- the bindings
can be determined by examining the program text. In addition, all
four use the "most-closely-nested rule" for resolving bindings.

However, the languages listed differ in their treatment of nested
procedures:

C forbids them (no closures).

Pascal allows them, but (in Pascal Classic, at least) forbids
their assignment to variables, use as parameters to
procedures, or use in a RETURN. (no closures, because it
turns out that a display can be used).

Modula-3 allows them, but forbids their assignment to variables or
their use in a RETURN. (closures needed, but must obey
stack lifetime rules). (I hope this example isn't too
obscure -- I'm rather familiar with it at the moment).

Scheme allows them (garbage-collected closures, except where
optimized away).

Now (returning to continuations [pun unintended]), it happens that
many many things can be defined in terms of continuations, including
exception handling. With closures, it is possible to implement some
really interesting exception handlers, but liberal use of closures and
continuations can (read will, with current technology) interfere with
optimization (in particular, it interferes with register allocation --
if you don't map the same variables to the same registers at the
appropriate program points, it gets very very messy, much like setjmp
and longjmp).

One limited use of continuations is actually very much like
setjmp/longjmp (though the syntax is different). In C, it might go
something like (ignore the type-checking, and pretend we have nested
procedures):

typedef any_type (*continuation)();

f(...)
{
    int key, x;
    tree t;

    int search(t, K) tree t; continuation K;
    {
        /* RETURN immediately via K if found */

        if (match(key, t)) K(t);
        search(left(t), K);
        search(right(t), K);

        /* Normal return for failure */
    }

    tree g(K) continuation K;
    {
        /* Find key in tree t, returning quickly if found. */
        search(t, K);

        /* If search returns at all, then it failed to find anything,
           so we return NULL */
        return NULL;
    }
    ...
    x = call_with_curr_continuation(g);
    ...
}

This should illustrate the (ab)use of continuations in a simple,
quick-return fashion. More sophisticated use of continuations will
store them in global variables and/or return them. This allows the
program to "return to" (a second time) an "inactive" call frame. The
implementation of these is a little more heavyweight, but it does
allow easy descriptions of backtracking algorithms and iterators.
Note the usefulness of lexical scope in this example.

David

bar...@think.com

Nov 28, 1989, 11:18:25 PM
In article <128...@sun.Eng.Sun.COM> gro...@sun.UUCP (Vinod Grover) writes:
>I do not know if you are talking about first-class continuations or not, but
>in languages which are stack-based first-class continuations are not trivial
>to add. (By stack-based I mean that the procedure entry/exit sequence
>behaves in a stack-like fashion)

This is a circular argument. It's the very existence of first-class
continuations in Scheme-like languages that causes them not to be
stack-based. If Scheme didn't have continuations then it would be
stack-based, according to your definition. Scheme is simply the result of
adding continuations to Lisp. Of course, when you add them you have to
redesign implementations to handle them properly. This means that you
can't always use a linear stack to implement activation records -- you may
have to allocate activations in the heap and garbage collect them.

Upward closures also require similar support. First-class continuations
are simply an extension of upward closures to include control information
as well as variable bindings.
Barry Margolin, Thinking Machines Corp.

bar...@think.com
{uunet,harvard}!think!barmar

Robert Marti

Nov 29, 1989, 8:27:38 AM
In article <34...@cornell.UUCP>, mur...@alsvid.cs.cornell.edu (Chet Murthy)
writes:

| pet...@cs.washington.edu (Peter C. Damron) writes:
| > C and C++ are definitely "lexically scoped".

|
| Yes, they are, but they don't provide support for closures as
| first-class objects (the funarg problem). So they don't provide
| "full" support for lexical scoping. Scheme/ML do.

Indeed, C and C++ do not even allow a procedure to be declared local to
another procedure. Maybe nesting is not only for the birds, after all ;-)

--
Robert Marti Phone: +41 1 256 52 36
Institut fur Informationssysteme
ETH-Zentrum CSNET/ARPA: marti%inf.e...@relay.cs.net
CH-8092 Zurich, Switzerland UUCP: ...uunet!mcvax!ethz!marti

Jonathan Shapiro

Nov 29, 1989, 12:04:47 PM
In article <31...@news.Think.COM> bar...@think.com writes:
>... Scheme is simply the result of adding continuations to Lisp.
>... Upward closures also require similar support. First-class continuations
>are simply an extension of upward closures to include control information
>as well as variable bindings.

I think you got it right the second time. The major thing that Scheme
offers is first-class closures (of which functions, continuations, and
in some implementations, environments are examples). There are also
some miscellaneous alterations, such as eliminating the namespace
distinction between functions and variables, that are a consequence of
making closures first class.

Jon

Max Hailperin

Nov 29, 1989, 12:09:29 PM
In article <1989Nov28....@mentor.com> plo...@mentor.com
(Patrick Logan) writes:
> ...

>My example of an object that uses continuations is a discrete event
>simulator.
> ...


This is a superb example, but stops one step short of showing the real
power of the idea. If all you are simulating is simple things like and
gates, you can just code the simulator the way Abelson and Sussman do
in their book, something like
(simulator 'after-delay
           2
           (lambda ()
             (self 'set-output-pin! (and input-pin-one input-pin-two))))
instead of
(simulator 'wait-for 2)
(self 'set-output-pin! (and input-pin-one input-pin-two))

The win in doing it the latter way (where simulator's wait-for uses
call-with-current-continuation) comes if you are simulating something
higher level. For example, I've been involved in some multiprocessor
simulations where you want to just treat the processing elements as
black boxes and simulate them by directly running the application code
on the real machine that's doing the simulating, time how long it takes,
and apply some scaling factor. *However*, whenever a remote reference
happens, you want to trap back to the simulator, which will cause the
right sequence of events to happen to simulate the network traffic,
etc. Meanwhile, other processors may be doing their thing. Eventually,
the event happens that corresponds to the memory contents from the
remote reference arriving back at the processor, and you want to
pick up where you left off -- which may be arbitrarily deeply nested
in procedure calling. This can trivially be done by embedding a
call-with-current-continuation in the remote-reference code, but otherwise
is a real nightmare. I know--I've done it both ways.

The key difference between the processor and the and-gate is that the
point at which you want to delay may be buried deep in calls. This
can happen in other, simpler situations as well, basically any time
you have a notion of "interruption" in the normal, non-trivial,
behavior of an entity. The fact that this comes up all the time in
simulating business organizations or what-not explains why Simula had
coroutining (which is what first-class continuations are being used
for in this example).

Ken Dickey

Nov 29, 1989, 3:31:35 PM
>... It's the very existence of first-class
>continuations in Scheme-like languages that causes them not to be
>stack-based. ...

[I saw this and could not help but respond... 8^]

People interested in this discussion might want to look at R. Kent
Dybvig's thesis "Three implementation models for Scheme", U of North
Carolina at Chapel Hill, 1987. It describes heap- and *stack*-based
scheme implementation techniques. Many (most?) compiled Scheme
implementations do use (possibly segmented) stacks. A good discussion
of implementation techniques can be found in "Implementation
Strategies for Continuations" by Clinger, Hartheimer and Ost in the
'88 ACM Lisp & FP Conference Proceedings.

I guess the gist of the above is that you can use a stack strategy
until a continuation is captured, at which time the used part of the
stack is `captured', marked immutable, and may be shared. If indeed
shared, then it must be copied to be reexecuted--otherwise it could
not be reexecuted multiple times. How the capture occurs depends on
your storage recycling (gc) strategy and may involve copying to the
heap, playing with page tables, etc. There are some technical
wrinkles, but stack strategies are certainly widely used.

Some people now believe that stacks are no longer required (e.g.
Appel, "Garbage Collection can be Faster than Stack Allocation",
Information Processing Letters, v25 #4, 1987), but that is another
question.


-Ken Dickey ke...@mrloog.WR.TEK.COM

Dan Weinreb

Nov 29, 1989, 5:58:26 PM
In article <99...@june.cs.washington.edu> pet...@cs.washington.edu (Peter C. Damron) writes:

In article <1989Nov28.1...@odi.com> d...@odi.com writes:
>In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>
> I do not believe that it is feasible to add continuations to C++ for
> any number of reasons, but I would be interested to hear the reactions
> in this community regarding their utility in object-oriented programming.
>
>They are just as useful in object-oriented programming as in straight
>Lisp or Scheme.

> How can you make this claim?

Easy, I just press the keys, and... Seriously, I have used
continuations in an object-oriented programming language (Lisp with
Flavors), for years, as have several of my former colleagues. So I
feel pretty qualified to claim that they're useful. Do you have any
more specific rebuttal than amazement that I could even say such a
thing?

>However, continuations in the Scheme style are only
>useful if full support is provided for lexical scoping. C and C++
>have no lexical scoping whatsoever.

> I just had to reply when I saw this. C and C++ are definitely "lexically
> scoped" (I would prefer to call it statically scoped).

In a sense, C and C++ are lexically scoped, but only in a trivial
sense, since there are no internal procedures and no block structure.
So, you are technically right, and I should revise my statement to say
that continuations are only useful if the continuation can reference
variables of lexically enclosing blocks, and classic
"continuation-passing style" in the Sussman and Steele sense requires
further that they should be able to refer to such variables even if
the enclosing block is no longer being executed (what is traditionally
known in the Lisp/Scheme community as "supporting upward funargs").
Few languages support upward funargs; Common Lisp and Scheme are two
that do. Many languages support downward funargs; Algol-60, Pascal,
and PL/I come to mind. In C, you can pass a function around as an
argument, but there is no "lexical link" allowing it to reference
variables in its parent block; in fact, there are no parent blocks in C.
(It can refer to static and extern, but that's not the same thing.)

Dan Weinreb Object Design, Inc. d...@odi.com

Dan Weinreb

Nov 29, 1989, 6:03:31 PM
In article <99...@ti-csl.csc.ti.com> gat...@m2.csc.ti.com (John Gateley) writes:

> I don't see why it would not be feasible, C++ has for loops and other
> control structures, why not continuations as well?

It depends what you mean. It would certainly be possible to invent a
definition of a language that was like C, but also had the features
needed to do continuation-style programming. Then you'd have a
definition of a new programming language. However, if you're talking
about getting the ANSI X3J11 committee to incorporate such features
into their next draft standard, I think you'd have a lot of trouble,
since the changes would be large and would entail various tradeoffs.

Patrick Logan

Nov 29, 1989, 7:22:04 PM

>>From: grover%brah...@Sun.COM (Vinod Grover)

>In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>>I do not believe that it is feasible to add continuations to C++ for
>>any number of reasons...

>
>I do not know if you are talking about first-class continuations or not, but
>in languages which are stack-based first-class continuations are not trivial
>to add. (By stack-based I mean that the procedure entry/exit sequence
>behaves in a stack-like fashion) In C the idea of setjmp/longjump comes as
>close to continuations as one can get. Admittedly, this is not as elegant as
>Scheme's notion of continuation but workable in concept.

To be more accurate, longjmp is not workable in concept, because a
longjmp can only return to a setjmp whose enclosing function is still
active; it cannot resume a frame that has already returned. A
continuation in Scheme can be captured once and executed zero or more
times, even after the capturing frame has exited. Things become more
complicated when you start considering nested setjmps and longjmps.

For example, in my last article, I gave an example of a discrete event
simulator. There is no way on Earth to implement that with setjmp and
longjmp.

Essentially, setjmp/longjmp implements a subset of what first-class
continuations implement. There is no way to implement first-class
continuations in C/C++ without fundamentally changing the semantics of
the language. At that point, you have to poke yourself in the eye with
a sharp object and ask yourself why you would do that.

Have fun.

Paul S. R. Chisholm

Nov 30, 1989, 1:04:19 AM
In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
> I do not believe that it is feasible to add continuations to C++ for
> any number of reasons . . .

In article <99...@ti-csl.csc.ti.com>, gat...@m2.csc.ti.com (John Gateley) writes:
> I don't see why it would not be feasible, C++ has for loops and other
> control structures, why not continuations as well?


"For loops and other control structures" just require conditional
branches. Continuations require something for subroutine calling
more complicated than a simple stack.

Remember, many C++ implementations generate C code. If C++ provides a
feature that can't be implemented in C, that whole method (which has
made it much easier and faster to get object-oriented programming
methods into the hands of programmers) goes down the tubes.

In article <128...@sun.Eng.Sun.COM> gro...@sun.UUCP (Vinod Grover) writes:
> I do not know if you are talking about first-class continuations or not, but
> in languages which are stack-based first-class continuations are not trivial
> to add. (By stack-based I mean that the procedure entry/exit sequence
> behaves in a stack-like fashion)

In article <31...@news.Think.COM>, bar...@think.com writes:
> This is a circular argument. It's the very existence of first-class
> continuations in Scheme-like languages that causes them not to be
> stack-based.


So you're saying that C++ (and by implication, C) would have to abandon
their simple stacks in order to have first-class continuations?

Efficiency (that is, allowing for the straightforward implementation of
translators that produce efficient code) is one of the goals of the C++
programming languages. Even with inline functions, a C++ program will
have lots of subroutine calls. A simple stack helps minimize the
overhead of those calls.

BTW, C++ class iterators either return a magic pointer, which can be
trivially convinced to reference the next element, or are separate iterator
classes, which set aside memory (conceptually, and *very* roughly) like
separate stack frames. They're not as elegant as iterators in Icon,
CLU, or Smalltalk. I realize that continuations are useful for a *lot*
more things than iteration.

(Having said all that, I've painted myself into a corner: AT&T's C++
2.0 library includes "tasks", a set of classes for coroutines. The
library reference manual tells how they're implemented, but I've never
read that part. Does anyone know how well the same techniques could be
applied to continuations, first-class or otherwise?)

Paul S. R. Chisholm, AT&T Bell Laboratories
att!pegasus!psrc, ps...@pegasus.att.com, AT&T Mail !psrchisholm
I'm not speaking for the company, I'm just speaking my mind.

m...@cs.rit.edu

Nov 30, 1989, 8:49:07 AM
In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>
>... What is interesting about continuations is that
>they freeze an execution context into an object whose internals are
>not exposed to the programmer, but which can be invoked at a later
>time and passed a single value which is the value to return. Since
>the object can be saved outside of the scope in which it was defined,
>the continuation can be returned from multiple times.
>
>The reason messaging a static data area isn't enough is that you need
>to preserve the stack, and portions of the stack can be shared between
>the captured continuation and the main program thread.

What is the essential difference between a continuation in Scheme and
the classic notion of a coroutine? Coroutines have been around for
years (Conway's paper describing them came out in the '60s), and can be
found in Simula-67 (where they are used with the object metaphor to
represent the semi-independent entities in a simulation). The
"processes" of Modula-2 are also coroutines. There are several
packages for C that implement "lightweight processes", and these are
essentially coroutines (I have one available if anyone is
interested). Finally, Stroustrup has a tasking package for C++ which
captures this concept.

This is not to deny that Scheme may have a more elegant, general, and
formal (and thus less ad hoc) approach, but I think we have to be alert
enough to recognize essential similarities buried under surface
differences. Have I found such a similarity here?

On a side note, aren't lexical scoping and continuations independent
concepts? Why couldn't I define a consistent language with
continuations where the meaning of a variable depends on the dynamic
invocation sequence rather than static nesting? I might not want to
*use* such a language (dynamic scoping makes it almost impossible to
understand a module/function in isolation), but that's an engineering
issue, not one of feasibility.

Mike Lutz
Mike Lutz Rochester Institute of Technology, Rochester NY
UUCP: {rutgers,cornell}!rochester!rit!mjl
INTERNET: mjl...@ultb.isc.rit.edu

bar...@think.com

Nov 30, 1989, 1:00:34 PM
In article <14...@cs.rit.edu> m...@prague.UUCP (Michael Lutz) writes:
>What is the essential difference between a continuation in Scheme and
>the classic notion of a coroutine?

I think a coroutine can only have one saved state at a time. Whenever you
invoke a coroutine you return from the operation that last exited the
coroutine. With continuations you can snapshot the state any number of
times, and return to any one of them.

Coroutines also don't permit you to return to functions that have already
performed a normal return. Continuations do.

>On a side note, aren't lexical scoping and continuations independent
>concepts? Why I couldn't define a consistent language with
>continuations where the meaning of a variable depends on the dynamic
>invocation sequence rather than static nesting? I might not want to
>*use* such a language (dynamic scoping makes it almost impossible to
>understand a module/function in isolation) but that's an engineering
>issue, not one of feasibility.

I think you're right. I believe that the relationship between lexical
scoping and continuations is more philosophical than architectural. I.e.,
lexical scoping and continuations are two aspects of a larger idea about
programming, and they work well together. Also, continuation-based
programming can be confusing, and lexical scoping makes it easier to
understand what is going on. You are right that dynamic scoping makes it
hard to understand modules (I was taught that it violates "referential
transparency"), and adding continuations would make it even harder.

bar...@think.com

Nov 30, 1989, 1:18:06 PM
In article <42...@pegasus.ATT.COM> ps...@pegasus.ATT.COM (Paul S. R. Chisholm) writes:
>Remember, many C++ implementations generate C code. If C++ provides a
>feature that can't be implemented in C, that whole method (which has
>made it much easier and faster to get object-oriented programming
>methods into the hands of programmers) goes down the tubes.

Anything can be implemented in C (it's Turing complete, or whatever the
term is). The C code may not look like the corresponding C++ code, but who
cares? There are Lisp compilers that generate C code, and they have no
problem dealing with the fact that C doesn't have complex and
infinite-precision numbers or a garbage collector. And on Symbolics Lisp
Machines, the C compiler generates Lisp code. In both directions, there is
no rule that similar concepts must be implemented using the corresponding
features of each language; for instance, C++ activation records don't have
to map directly to C activation records.

>So you're saying that C++ (and by implication, C) would have to abandon
>their simple stacks in order to have first-class continuations?
>
>Efficiency (that is, allowing for the straightforward implementation of
>translators that produce efficient code) is one of the goals of the C++
>programming languages. Even with inline functions, a C++ program will
>have lots of subroutine calls. A simple stack helps minimize the
>overhead of those calls.

It's a common fallacy that continuations impose excessive overhead. First
of all, the overhead is generally only imposed when continuations are
actually used. A program that obeys pure stack discipline should not be
affected. Only when the compiler sees that a continuation is being used
might it have to generate extra code to deal with it. Other people have
pointed out the literature on "segmented stacks", which are alternatives to
heap-allocated activation records. I'm not extremely familiar with Scheme
implementation, but implementors I've talked to have convinced me that the
availability of continuations doesn't prevent efficiency. Many of the
languages that provide continuations and/or upward closures also provide a
garbage-collected heap, so heap-allocation of activation records (when
necessary) is the simplest, and hence frequently initial, technique used by
implementors of such languages; such implementations should not be used as
a benchmark to indict continuations in general.

Peter da Silva

Nov 30, 1989, 1:22:46 PM
In article <1989Nov29.2...@odi.com> d...@odi.com writes:
> It depends what you mean. It would certainly be possible to invent a
> definition of a language that was like C, but also had the features
> needed to do continuation-style programming.

There is. It's called C. Almost.

Seriously. All you need to do is have a bit of unportable code that mucks
around inside the jmp_buf. Save the stack between here and there, do a setjmp
to mark the end of the current stack, and then longjmp back to the saved
spot. To continue, restore the saved stack and longjmp back to the mark.
It would be advisable to write this code in assembly language :->.

This would also give you the ability to add coroutines, threads, and all
sorts of other good stuff. Let's call it ++C.
--
`-_-' Peter da Silva <pe...@ficc.uu.net> <pe...@sugar.lonestar.org>.
'U` -------------- +1 713 274 5180.
"The basic notion underlying USENET is the flame."
-- Chuq Von Rospach, ch...@Apple.COM

Peter C. Damron

Nov 30, 1989, 1:41:17 PM
>In article <99...@june.cs.washington.edu> pet...@cs.washington.edu (Peter C. Damron) writes:
>>In article <1989Nov28.1...@odi.com> d...@odi.com writes:
>>>In article <16...@odin.SGI.COM> sh...@delrey.sgi.com (Jonathan Shapiro) writes:
>>>>I do not believe that it is feasible to add continuations to C++ for
>>>>any number of reasons, but I would be interested to hear the reactions
>>>>in this community regarding their utility in object-oriented programming.
>>>
>>>They are just as useful in object-oriented programming as in straight
>>>Lisp or Scheme.
>>
>>How can you make this claim?

>Easy, I just press the keys, and... Seriously, I have used
>continuations in an object-oriented programming language (Lisp with
>Flavors), for years, as have several of my former colleagues. So I
>feel pretty qualified to claim that they're useful. Do you have any
>more specific rebuttal than amazement that I could even say such a
>thing?

The problem I have with including continuations in an object-oriented
language is a philosophical one. Objects should be self-contained and
hide information. All interactions with objects should be through
the well-defined methods of the object. Now, if you add continuations,
there is the potential to pass the continuation outside of the object, thus
creating another entry point into the object that is not a member function.
I see this as an information-hiding problem for continuations in OO languages.
Perhaps you can relate why this is not a problem in CLOS.

>>>However, continuations in the Scheme style are only
>>>useful if full support is provided for lexical scoping. C and C++
>>>have no lexical scoping whatsoever.
>>
>>I just had to reply when I saw this. C and C++ are definitely "lexically
>>scoped" (I would prefer to call it statically scoped).

>In a sense, C and C++ are lexically scoped, but only in a trivial
>sense, since there are no internal procedures and no block structure.
>So, you are technically right, and I should revise my statement to say
>that continuations are only useful if the continuation can reference
>variables of lexically enclosing blocks, and classic
>"continuation-passing style" in the Sussman and Steele sense requires
>further that they should be able to refer to such variables even if
>the enclosing block is no longer being executed (what is traditionally
>known in the Lisp/Scheme community as "supporting upward funargs").
>Few languages support upward funargs; Common Lisp and Scheme are two
>that do. Many languages support downward funargs; Algol-60, Pascal,
>and PL/I come to mind. In C, you can pass a function around as an
>argument, but there is no "lexical link" allowing it to reference
>variables in its parent block; in fact, there are no parent blocks in C.
>(It can refer to static and extern, but that's not the same thing.)

Let's get this straight.

1. Lexical (static) scoping is orthogonal to both continuations and
first-class functions. C and C++ are fully lexically scoped languages.
Scoping has to do with naming -- which value a name refers to.

2. C and C++ do not allow nested functions, but they do support block
structure. Thus the lexical scoping is not "trivial".

3. Continuations are orthogonal to first-class functions (e.g. languages
that allow both upward and downward funargs) and are orthogonal to
block structure. It is not apparent to me that continuations are
any less useful without block structure (though C and C++ have it anyway).
It is also not apparent to me that continuations are any less useful
without nested functions (though maybe more cumbersome).

4. C and C++ have first class functions. Functions can be passed as
parameters, returned as function values, and assigned to variables.
The reason that C and C++ do not allow nested functions is precisely
so that they can have first-class functions implemented on a stack
(e.g. no closure required). The "parent block" of a function is
simply the global variables.

Hope this helps to clear up any mis-understandings,

Paul Snively

Nov 30, 1989, 1:50:56 PM
In article <14...@cs.rit.edu> m...@cs.rit.edu writes:
> What is the essential difference between a continuation in Scheme and
> the classic notion of a coroutine? Coroutines have been around for
> years (Conway's paper describing them came out in the '60s), and can be
> found in Simula-67 (where they are used with the object metaphor to
> represent the semi-independent entities in a simulation). The
> "processes" of Modula-2 are also ccoroutines. There are several
> packages for C that implement "light weight processes", and these are
> essentially ccoroutines (I have one available if anyone is
> interested). Finally, Stroustrup has a tasking package for C++ which
> captures this concept.
>
> This is not to deny that Scheme may have a more elegant, general, and
> formal (and thus less ad-hoc) approach, but I think we have to be alert
> enough to recognize essential similarities buried under surface
> differences. Have I found such a similarity here?

No, but you've come very close. It's trivially easy to implement
coroutines using continuations, but they are not the same thing.

Explaining why continuations are more general than coroutines is a tad
tricky. George Springer and Dan Friedman have a new book out from The MIT
Press, "Scheme and the Art of Programming," that devotes an entire chapter
to continuations, and a section in that chapter to coroutines. I'd advise
looking there for a reasonably full treatment of the subject.

__________________________________________________________________________
Just because I work for Apple Computer, Inc. doesn't mean that they
believe what I believe or vice-versa.
__________________________________________________________________________
C++ -- The language in which only friends can access your private members.
__________________________________________________________________________

James Grandy

Nov 30, 1989, 3:03:45 PM
A question: if you fix up Smalltalk blocks so that their state is not
stored with the creator method (that is, so that they are re-entrant,
etc.), would they be classified as continuations?

James Grandy
(401) 862-1610 j...@iris.brown.edu
IRIS Brown University
Box 1946, Providence, RI 02912

Peter da Silva

Nov 30, 1989, 4:44:37 PM
Another question: what distinguishes continuations from generators?

Thomas Wang

Nov 30, 1989, 6:54:48 PM
My nitpick with continuations is that they do not fit the model of
object programming.

Continuations save the function context to be able to continue the
function at a later time. The information is tied to the function
activation record.

The object programming style, on the other hand, suggests that important
data structures should be tied to the object, not the function.

The need for continuations should be lessened if one has some lightweight
process implementation at hand.

Actually, I am still wondering how one is supposed to build a
general-purpose iterator in an OO language. One can always have
ad hoc iterators, but a consistent iterator interface, I suspect, would
be hard to do.


-Thomas Wang (Mak-Kuro Kurosuke, come on out! If you don't come out,
we'll pull your eyeballs out!
- as heard in Tonari No Totoro )

ttw...@polyslo.calpoly.edu

bar...@think.com

Nov 30, 1989, 6:58:33 PM
In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter C. Damron) writes:
> Now, if you add continuations,
>there is the potential to pass the continuation outside of the object, thus
>creating another entry point into the object that is not a member function.
>I see this as an information hiding problem for continuations in OO languages.
>Perhaps you can relate why this is not a problem in CLOS.

CLOS makes no attempt to implement information hiding. (slot-value
<object> <slot-name>) may be called from anywhere, not just methods. In
CLOS, methods are just functions that know what the types of their required
arguments are, and they are invoked through generic function dispatching.

Lisp, in general, doesn't restrict the programmer's options for
philosophical reasons. If the programmer wants to pass a lexical closure
out of a method, that's his prerogative.

Jonathan Shapiro

Nov 30, 1989, 11:26:26 PM
In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter C. Damron) writes:
>4. C and C++ have first class functions. Functions can be passed as
> parameters, returned as function values, and assigned to variables.

>Hope this helps to clear up any mis-understandings,
>Peter.

Actually, Peter, it creates one. C and C++ do not have first class
functions. Functions cannot be passed as values. Function pointers
can, which is a somewhat different thing. To be first class, it must
be possible to treat functions as *values* rather than as *labels*.
For example, the following is a syntax error:

int foo(int);
int bar(int);

foo = bar

If functions were first class, this would be altogether legal.

Otherwise, your posting was correct.

Ted Theodore (W) Leung

Dec 1, 1989, 12:19:49 PM
In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter C. Damron) writes:
>4. C and C++ have first class functions. Functions can be passed as
> parameters, returned as function values, and assigned to variables.
> The reason that C and C++ do not allow nested functions is precisely
> so that they can have first-class functions implemented on a stack
> (e.g. no closure required). The "parent block" of a function is
> simply the global variables.
>
It seems like it would be useful to make a distinction between "first
class" functions which require their implementation via a closure, and
those which do not. There is a fairly standard trick in the Scheme
community, which uses closures to create objects. The closed over
variables are used to represent the local state of the object, and the
code of the closure is a big case statement which selects the correct
method to be invoked. It seems to me that you cannot use C "first class"
functions to create objects, because they have no closure.....

I do think that C/C++ functions are not first class because it is not
possible to create a function on the fly the way that you can in LISP
or Scheme.

Ted

--------------------------------------------------------------------
Internet/CSnet: t...@cs.brown.edu | Ted "Theodore" Leung
BITNET: t...@BROWNCS.BITNET | Box 1910, Brown University
UUCP: uunet!brunix!twl | Providence, RI 02912

Allen Leung

Dec 1, 1989, 2:59:01 PM
In article <10...@june.cs.washington.edu>, pet...@cs.washington.edu (Peter C. Damron) writes:
> 4. C and C++ have first class functions. Functions can be passed as
> parameters, returned as function values, and assigned to variables.
> The reason that C and C++ do not allow nested functions is precisely
> so that they can have first-class functions implemented on a stack
> (e.g. no closure required). The "parent block" of a function is
> simply the global variables.
>
> Peter.
> pet...@cs.washington.edu
> {ucbvax,decvax,etc.}!uw-beaver!uw-june!peterd


I must disagree.
C and C++ definitely *don't* have first class functions. From
a functional programming viewpoint, a type alpha being first class means that
values of alpha enjoy all the privileges granted to first class types.
In terms of C this means you are supposed to be able to do things like
this with functions:

int(*)() foo( int y )
{
int x; // if you can local declare int/float/* ....etc
float r;
char *a;

int bar( int z ) // you should be able to local declare bar
{ return y + z; } // y is free in bar but bound in foo

return bar;
}

--Allen
al...@sbcs.sunysb.edu

Max Hailperin

Dec 1, 1989, 3:58:17 PM
In article <21...@brunix.UUCP> t...@boojum.UUCP (Ted "Theodore" (W) Leung)
writes:

>In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter
>C. Damron) writes:
>>4. C and C++ have first class functions.
>> ... no closure required ...

>It seems like it would be useful to make a distinction between "first
>class" functions which require their implementation via a closure, and
>those which do not.

As an aside to this discussion, it appears to me that closures can be
emulated relatively smoothly in C++. The two key features that support
this are:
1) The procedure-call operator can be overloaded, so that you can define
a class of closure objects which can be called as if they were procedures.
The constructor (lambda equivalent) can copy the closed-over variables
into slots with the same name, so that then the procedure-call member
function can use them.
2) The notion of a reference type (explicit aliasing) allows the closures
to share mutable variables that are in a common scope.

Of course, this is still a bit messy, but nothing compared with
emulating closures in C. Also, there are other language features that
interact poorly with a higher-order style, limiting the utility.

I haven't actually used this technique (I'm lucky enough to have a
real lisp, instead of having to make C++ pretend it's one). I'd be
very interested to hear from anyone who has.

Dan Weinreb

Dec 1, 1989, 6:10:07 PM
In article <10...@june.cs.washington.edu> pet...@cs.washington.edu (Peter C. Damron) writes:


> The problem I have with including continuations in an object-oriented
> language is a philosophical one. Objects should be self-contained and
> hide information. All interactions with objects should be through
> the well-defined methods of the object. Now, if you add continuations,
> there is the potential to pass the continuation outside of the object, thus
> creating another entry point into the object that is not a member function.
> I see this as an information-hiding problem for continuations in OO languages.
> Perhaps you can relate why this is not a problem in CLOS.

Well, I admit that this is all rather hard to articulate. I see
continuations as a control structure concept. It is part of the way
you tell your computer program what to do next, how the locus of
control should move throughout the lines of code. As such, it is
really orthogonal to the important concepts of object-oriented
programming: you can have loops in OOPS, and you can have recursion in
OOPS, so why not coroutines and continuations?

In this specific case, no, I don't think it creates a new
"entrypoint", although this statement is based on a concept of
"entrypoint" that I'm having trouble articulating. Part of the
important point is that encapsulation is not violated, because only a
method of a class can create a continuation that continues within that
class. I hope that explains what I mean; sorry I can't do better.

> Let's get this straight.

Yes, I used the terminology in an inexact (wrong) way, as I said in
recent postings. I apologize. I meant to say that C and C++ do not
have first-class nested functions that are block structured with
respect to their containing functions.

> 3. Continuations are orthogonal to first-class functions (e.g. languages
> that allow both upward and downward funargs) and are orthogonal to
> block structure.

Hmm. The only continuation-passing-style programs that I've ever seen
certainly spent a lot of time dealing with first-class objects that
were upward and downward funargs. Perhaps you could sketch out an
extension to C that would provide continuations, without providing
such functional objects, and then I'd see what you're getting at.

Dan Weinreb

Dec 1, 1989, 6:22:55 PM
In article <71...@ficc.uu.net> pe...@ficc.uu.net (Peter da Silva) writes:

> In article <1989Nov29.2...@odi.com> d...@odi.com writes:
> > It depends what you mean. It would certainly be possible to invent a
> > definition of a language that was like C, but also had the features
> > needed to do continuation-style programming.
>
> There is. It's called C. Almost.
>
> Seriously. All you need to do is have a bit of unportable code that mucks
> around in struct jmp_buf. Save the stack between here and there, do a setjmp
> to mark the end of the current stack, and then longjmp back to the saved
> spot. To continue, restore the saved stack and longjmp back to the mark.
> It would be advisable to write this code in assembly language :->.
>
> This would also give you the ability to add coroutines, threads, and all
> sorts of other good stuff. Let's call it ++C.

But it would not give me the ability to write programs that are in
what is traditionally (at the MIT AI lab -- OK, I'm provincial, but
that's where I was trained) referred to as "continuation-passing
style". When all the interesting papers about "continuation-passing
style" were coming out at MIT in the late '70s, they showed a lot of
very interesting techniques and things that you could do with
continuations. But these techniques also employed funargs (both
kinds). It's true that you could add the control structure all alone,
and call what you get "continuation-passing style", but it would not
be the style to which I had become accustomed. Obviously my posting
was unclear because I had in mind this particular specialized meaning
of the phrase "continuation-passing style", and I didn't explain what
I meant. Sorry about that.

Vinod Grover

Dec 1, 1989, 11:15:36 PM
>In article <128...@sun.Eng.Sun.COM> gro...@sun.UUCP (Vinod Grover) writes:
>>I do not know if you are talking about first-class continuations or not, but
>>in languages which are stack-based first-class continuations are not trivial
>to add. (By stack-based I mean that the procedure entry/exit sequence
>>behaves in a stack-like fashion)
>
>This is a circular argument. It's the very existence of first-class
>continuations in Scheme-like languages that causes them not to be
>stack-based. If Scheme didn't have continuations then it would be
>stack-based, according to your definition.
One can also argue that it is the existence of lexical scoping, and
higher-order functions that cause a language not to be stack based. In this
sense, continuations are simply functions whose environment may be embedded
in other functions.

Vinod Grover
Sun Microsystems

Mikael Pettersson

Dec 2, 1989, 1:12:35 AM
In article <1989Dec1.2...@Neon.Stanford.EDU> m...@sumex-aim.Stanford.EDU (Max Hailperin) writes:
>As an aside to this discussion, it appears to me that closures can be
>emulated relatively smoothly in C++.
> [...]

>I haven't actually used this technique (I'm lucky enough to have a
>real lisp, instead of having to make C++ pretend it's one). I'd be
>very interested to hear from anyone who has.


I have. The idea is this: whenever you want to create a closure with
type T, body B and (list of) free variables FV, you derive from an
abstract base class `funcT' a class `foo_funcT' with instance variables
`FV' and whose operator() does `B' when invoked. Something like this:

(define (bar L)
(let ((i 1))
(for-each (lambda (item)
(princ i)(princ ":")(print item)
(set! i (1+ i)))
L)))

(assuming a left-to-right calling order in for-each, this should print
each item in the list L preceded by its position in the list).
The C++ emulation could look like:


class funcT { // say: Obj->void
public:
virtual void operator()(Obj) = 0;
};
class ObjList {
...
friend void for_each(funcT&, ObjList);
...
};
...
class foo_funcT : public funcT {
int& i;
public:
foo_funcT(int& j) : i(j) {}
void operator()(Obj);
};
void foo_funcT::operator()(Obj item) {
cout << i << ":" << item << "\n";
++i;
}
void bar(ObjList L) {
int i = 1;
foo_funcT fun(i);
for_each(fun, L);
}

This technique handles downwards funargs quite nicely, upwards funargs
require explicit heap allocation.
--
Mikael Pettersson, Dept of Comp & Info Sci, University of Linkoping, Sweden
email: m...@ida.liu.se or ...!{mcsun,munnari,uunet,unido,...}!sunic!liuida!mpe

Ken Dickey

Dec 2, 1989, 6:51:41 PM
Sorry, you got me in a bad mood. FLAME ON!

In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter C. Damron) writes:
>

>The problem I have with including continuations into an object oriented
>language is a philisophical one. Objects should be self-contained and
>hide information. All interactions with objects should be through
>the well defined methods of the object. Now, if you add continuations,
>there is the potential to pass the continuation outside of the object, thus
>creating another entry point into the object that is not a member function.
>I see this as an information hiding problem for continuations in OO languages.

So Smalltalk is not object oriented because it has blocks and returns
objects (remember that blocks may be internal to, or components of, other
objects)? How do you implement backtracking objects? Having the
ability to deal with control flow in well-structured ways is very
helpful. It seems that you are trying to push everything through an
OO keyhole. Functional interfaces are also well defined. In
languages such as Scheme, you can program in functional, imperative,
OO, etc. styles as appropriate to the solutions you are trying to
structure. There are real problems in trying to use a single paradigm
to cover all solutions. Some just don't fit.

>Let's get this straight.

>...


>2. C and C++ do not allow nested functions, but they do support block
> structure. Thus the lexical scoping is not "trivial".

In C you get a local/global namespace. You don't have name scoping in
blocks nested more than 1 deep. That's pretty trivial.

>3. Continuations are orthogonal to first-class functions (e.g. languages
> that allow both upward and downward funargs) and are orthogonal to
> block structure. It is not apparent to me that continuations are
> any less useful without block structure (though C and C++ have it anyway).
> It is also not apparent to me that continuations are any less useful
> without nested functions (though maybe more cumbersome).

I suggest that you look at some of the Scheme literature for examples
(e.g. Abelson & Sussman: "Structure and Interpretation of Computer
Programs", MIT Press/McGraw-Hill, 1985).

>4. C and C++ have first class functions. Functions can be passed as
> parameters, returned as function values, and assigned to variables.

They cannot be unnamed. They cannot be parameterized. They are *not*
first class.

Perhaps a concrete example would help:

(define (curried-add x)
(lambda (y) (+ x y)) ) ; returns an unnamed function

(define add3 (curried-add 3)) ; function is parameterized & bound
; to a variable.

(add3 5) --> 8 ; usage: add 5 and 3 to get 8


> The reason that C and C++ do not allow nested functions is precisely
> so that they can have first-class functions implemented on a stack
> (e.g. no closure required). The "parent block" of a function is
> simply the global variables.

Here you have something. To quote from David Kranz's thesis ("ORBIT,
an optimizing Compiler for Scheme", Yale U, 1988; p. 9):
The important thing to realize is that Pascal and C are really just
restricted subsets of Scheme, in particular those parts of Scheme
which can be implemented efficiently using conventional compiler
technology!

You have to use non-standard compiler/runtime technology to compile
Scheme efficiently. However, Scheme is a much smaller, more powerful
language than C. When C++ becomes a language (when it is described in
a single document and has a well defined semantics) Scheme will be
smaller than C++ as well [the Scheme language report is ~50 pages,
including a formal semantics].

END-FLAME!

-Ken

Hans Boehm

Dec 3, 1989, 12:36:40 PM
In article <51...@tekcrl.LABS.TEK.COM> ke...@mrloog.WR.TEK.COM (Ken Dickey) writes:

>In C you get a local/global namespace. You don't have name scoping in
>blocks nested more than 1 deep. That's pretty trivial.

What about:

int i;
int f()
{
int i;
...
{
int i;
...
}
}

I won't argue about whether this is trivial, but there are certainly nested
scopes.

>
>>3. Continuations are orthogonal to first-class functions (e.g. languages
>> that allow both upward and downward funargs) and are orthogonal to
>> block structure. It is not apparent to me that continuations are
>> any less useful without block structure (though C and C++ have it anyway).
>> It is also not apparent to me that continuations are any less useful
>> without nested functions (though maybe more cumbersome).
>
>I suggest that you look at some of the Scheme literature for examples
>(e.g. Abelson & Sussman: "Structure and Interpretation of Computer
>Programs", MIT Press/McGraw-Hill, 1985).

A more specific argument would help here. I am reasonably familiar with
the Scheme literature, and it seems to me that most of the standard
examples involving continuations translate reasonably easily into a
world without closures, though they may be much less clean. Call/cc is
probably not quite the right primitive. An interface closer to
setjmp/longjmp (but without the restriction on the caller of setjmp
exiting, and with a cleaner distinction of what's saved, and what isn't)
probably makes more sense. This would certainly be adequate for
implementing coroutines and the like. It would be similar to
Call/cc(lambda x . (jmpbuf := x; 0)) in Scheme-like languages.

Note that unlike in Scheme, there is no analog to "continuation-
-passing-style" in C/C++. But this is orthogonal to the above
discussion.

>
>>4. C and C++ have first class functions. Functions can be passed as
>> parameters, returned as function values, and assigned to variables.
>
>They cannot be unnamed. They cannot be parameterized. They are *not*
>first class.

I'm not sure what "They cannot be parameterized" means. For most
interpretations of the sentence, it's wrong.

I would be among the first to argue that languages should provide real
higher-order functions. However, it also seems clear that this requires
garbage collection of one form or another. Furthermore it removes
some control over allocation behavior from the programmer. Both of
these in turn cause some difficulty with programs operating under severe
real-time constraints. C and C++ chose not to go this route. Given that
they didn't, I know of no way to implement continuations efficiently,
while sticking to the design philosophy of either language. Thus it doesn't
make sense to me to add first-class continuations to C++. But this
argument has nothing to do with their utility to the programmer.

Hans-J. Boehm
bo...@rice.edu

Allen Leung @ SUNY at Stony Brook, L.I., NY

Dec 3, 1989, 3:00:51 PM
In article <33...@brazos.Rice.edu>, bo...@flora.rice.edu (Hans Boehm) writes:
> In article <51...@tekcrl.LABS.TEK.COM> ke...@mrloog.WR.TEK.COM (Ken Dickey) writes:
>
> >In C you get a local/global namespace. You don't have name scoping in
> >blocks nested more than 1 deep. That's pretty trivial.
> What about:
>
> int i;
> int f()
> {
> int i;
> ...
> {
> int i;
> ...
> }
> }
>
> I won't argue about whether this is trivial, but there are certainly nested
> scopes.

True, C has nested scopes, but only with non-functions. We certainly
can't write

int i;
int f()
{
int i;

int g()
{
... i ...
};
...
}

When we disallow local declaration of functions, I think all the sub-blocks
can be lifted up to the main block with proper renaming of bound variables.
(If you treat sub-blocks as lambda expressions with a throw-away parameter,
then they can always be beta-reduced at compile time.)
C with nested scopes actually has no more power (not in the Turing-machine
sense) than C without them, even though it's more convenient.


> A more specific argument would help here. I am reasonably familiar with
> the Scheme literature, and it seems to me that most of the standard
> examples involving continuations translate reasonably easily into a
> world without closures, though they may be much less clean. Call/cc is
> probably not quite the right primitive. An interface closer to
> setjmp/longjmp (but without the restriction on the caller to setjmp
> exiting, and with a cleaner distinction of what's saved, and what isn't)
> probably makes more sense. This would certainly be adequate for
> implementing coroutines and the like. It would be similar to
> Call/cc(lambda x . (jmpbuf := x; 0)) in Scheme-like languages.
>
> Note that unlike in Scheme, there is no analog to "continuation-
> -passing-style" in C/C++. But this is orthogonal to the above
> discussion.
>
> >
> >>4. C and C++ have first class functions. Functions can be passed as
> >> parameters, returned as function values, and assigned to variables.
> >
> >They cannot be unnamed. They cannot be parameterized. They are *not*
> >first class.
>
> I'm not sure what "They cannot be parametrized" means. For most
> interpretations of the sentence, it's wrong.

Maybe the original author means that C functions cannot reference non-global
free variables.
In any case, I'm extremely doubtful that continuations in a
language with *side-effects* can be translated easily into a language
without the concept of a closure, since a continuation is essentially a
closure (entry point + environment).
A language with side effects complicates the implementation of
closures/continuations tremendously: a variable must reside in
one place, either on the stack or in the heap; any duplication
will make the variable's update non-trivial.
I'm not familiar with Scheme, but I guess it only *solves* this
problem by allocating the stack frame of possibly 'closurised' (you know
what I mean) code on the heap, similar to what most Lisps do with an upward
funarg (or is it downward? I can never remember which is which.)

A language that supports continuations must/should support
first-class/higher-order functions. They have about the same complexity.
I believe we can simulate continuations with higher-order functions
easily enough. A continuation is just a function that never returns;
that "never returns" part has some problems related to stack allocation,
of course.

al...@sbcs.sunysb.edu

Piercarlo Grandi

Dec 4, 1989, 10:40:45 AM
It has been my long-held belief that objects and closures are really much
the same thing, to a certain extent. In many ways you can simulate closures
with objects (especially if you have suspend/resume) and objects with
closures.

My reckoning has always been that the most powerful possible entity is a
triple (dictionary, code, saved state), where each member of the triple may
be possibly null. If you also have the three steps for procedure invocation
directly available, like in SL5, then you can do wonderful things, and even
more incredible ones if you can have code executed at compile time (e.g.
like in Lisp, or in the "supercompiler" concept). The BETA language has
"patterns" that are really something like this.

As to more mundane things, in certain respects most current popular OO
languages are a step backwards from Simula 67, e.g. detach/suspend/resume,
inner, and block prefixing, etc...
--
Piercarlo "Peter" Grandi | ARPA: pcg%cs.abe...@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: p...@cs.aber.ac.uk

Vinod Grover

Dec 4, 1989, 5:34:49 PM
In article <1989Nov30....@mentor.com> plo...@mentor.com (Patrick Logan) writes:

>
> >>From: grover%brah...@Sun.COM (Vinod Grover)
> > Admittedly, this is not as elegant as
> >Scheme's notion of continuation but workable in concept.
> >
>To be more accurate, longjump is not workable in concept because a
>longjump can only be executed once per setjmp. A continuation in
>Scheme can be captured once and executed zero or more times. Things
>become more complicated when you start considering nested setjmps
>and longjumps.
Yes, you are quite right. In fact the problem is even deeper than that, as I
just learned, if the function calling setjmp returns before the call to
longjmp is made then the longjmp is made to a non-existent environment.
Thanks for correcting me.

Vinod Grover
Sun Microsystems.

Joshua Stein

Dec 5, 1989, 10:37:00 AM
Can someone post a non-implementation-dependent definition of continuations:
what they are, what they do, and how they fit into the object-oriented paradigm?
--
*******************************************************************************
Joshua Stein/Pacific*Bell/415 823-2411 |"I make it a rule to never get involved
the usual generic disclaimer goes here |with someone who's possessed ... well,
uucp:{world}!pacbell!pbhyf!jste |it's more of a guideline than a rule"

Peter da Silva

unread,
Dec 5, 1989, 12:34:14 PM12/5/89
to
In article <1989Nov30....@mentor.com> plo...@mentor.com (Patrick Logan) writes:
> To be more accurate, longjmp is not workable in concept because a
> longjmp can only be executed once per setjmp.

What if you (as I suggested in another post) save and restore the whole
stack?
--
`-_-' Peter da Silva. +1 713 274 5180. <pe...@ficc.uu.net>.
'U` Also <pe...@ficc.lonestar.org> or <pe...@sugar.lonestar.org>.

"If you want PL/I, you know where to find it." -- Dennis

Dave Jones

unread,
Dec 5, 1989, 10:13:03 PM12/5/89
to
From article <17...@odin.SGI.COM>, by sh...@delrey.sgi.com (Jonathan Shapiro):

> In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter C. Damron) writes:
>>4. C and C++ have first class functions. Functions can be passed as
>> parameters, returned as function values, and assigned to variables.
>
>>Hope this helps to clear up any mis-understandings,
>>Peter.
>
> Actually, Peter, it creates one.

Oh, gimme a break. I think the only misunderstanding is in the definition
of the value-charged adjectival phrase, "first class". If by "first class",
you mean "variable", then C's function-names are not "first class", but that
certainly does not make them second or third class! For most purposes,
constant-names for functions are to be preferred over function-valued variables.

> C and C++ do not have first class
> functions. Functions cannot be passed as values. Function pointers
> can, which is a somewhat different thing. To be first class, it must
> be possible to treat functions as *values* rather than as *labels*.

What is a function's "value", if not a label? What other candidates
are there? The only issue here is whether the name always refers to the
same function: Is the name effectively a label, or does it refer to a
variable which can contain any of a number of different labels? That is
the choice.

There are benefits to constant function-names. I can recall some awful,
XDR-related debugging sessions, struggling to figure out whether this or
that function-variable was bound to this or that procedure. It's best to
use constants when possible, reserving the use of variables for the
few occasions when they are really needed. Besides, on most machines, the
procedure call will be quicker if the name is a constant, resolved by
the linker, rather than at runtime.

> For example, the following is a syntax error:
>
> int foo(int);
> int bar(int);
>
> foo = bar
>
> If functions were first class, this would be altogether legal.
>

Then we would have to add something like ...

constant int foo(int);

... to recover the lost functionality. I would much prefer that
function-names be constants in the default case.

Here's how you can have it both ways, constants and variables, in
plain old K&R C.

extern int bar();   /* "bar" names an external constant. */
int (*foo)();       /* "foo" names a variable. */

void foozle()
{
    foo = bar;      /* What could be simpler? */
}

Nick Nussbaum

unread,
Dec 6, 1989, 3:28:19 PM12/6/89
to

Those interested in continuations in C might want to look
at a paper by Thomas Bruel which I believe was presented at the
Usenix 88 conference. He proposed implementing closures by
generating a pointer to some automatically generated code which sets
up the environment and then jumps into the function. Thus closures
can be used in the same way as vanilla function pointers. He also
discusses the desirability of nested functions.

--
Nick Nussbaum PO 68 - MIT Branch
n...@rice-chex.ai.mit.edu Cambridge, MA 02139

Ken Dickey

unread,
Dec 6, 1989, 4:10:49 PM12/6/89
to
In article <33...@brazos.Rice.edu> bo...@flora.rice.edu (Hans Boehm) writes:
>In article <51...@tekcrl.LABS.TEK.COM> ke...@mrloog.WR.TEK.COM (Ken Dickey) writes:
>
>>In C you get a local/global namespace. You don't have name scoping in
>>blocks nested more than 1 deep. That's pretty trivial.
>What about:
>
>int i;
>int f()
>{
> int i;
> ...
> {
> int i;
> ...
> }
>}
>
>I won't argue about whether this is trivial, but there are certainly nested
>scopes.

Try it with functions.

>... I am reasonably familiar with
>the Scheme literature, and it seems to me that most of the standard
>examples involving continuations translate reasonably easily into a
>world without closures, though they may be much less clean. Call/cc is
>probably not quite the right primitive. An interface closer to
>setjmp/longjmp (but without the restriction on the caller to setjmp
>exiting, and with a cleaner distinction of what's saved, and what isn't)
>probably makes more sense. This would certainly be adequate for
>implementing coroutines and the like. It would be similar to
>Call/cc(lambda x . (jmpbuf := x; 0)) in Scheme-like languages.

The literature I was thinking of is on the order of:

Wand, Mitchell: "Continuation-Based Program Transformation Strategies"
JACM V 27, #1, January 1980

Friedman, Haynes, & Kohlbecker: "Programming with Continuations"
in "Program Transformations and Programming Environments"
Ed: P. Pepper
Springer-Verlag 1984

Haynes, Friedman, & Wand: "Obtaining Coroutines with Continuations"
Computer Languages: V11, #3/4 1986

Haynes, Friedman, & Wand: "Continuations & Coroutines"
1984 Symposium on Lisp and Functional Programming

Friedman, Haynes: "Constraining Control"
Proceedings 12th Annual Symposium on
Principles Of Programming Languages (ACM), January 1985

Wand: "Continuation-Based Multiprocessing"
Proceedings of the 1980 Lisp Conference

Felleisen, Friedman, Kohlbecker, & Duba: "Reasoning with Continuations"
Proceedings of the Symposium on Logic in Computer Science, IEEE Press, 1986

Haynes & Friedman: "Engines Build Process Abstractions"
1984 Symposium on Lisp and Functional Programming

Haynes & Friedman: "Abstracting Timed Preemption with Engines"
Computer Languages, V20, #2, 1987 (pub Great Britain).

Clinger, Will: "The Scheme Environment: Continuations"
Lisp Pointers, V1, #2, June-July 1987


I guess I don't quite see what you are looking for here. With a timer
interrupt, you can implement multitasking. Other applications [8^] of
continuations are non-blind backtracking, dynamic-wind, reasonable
error recovery (correct & reexecute), etc. Yes, you can do these
things in the os/runtime environment (outside of the C language).
Yes, you can do this in less clean ways. Yes, call/cc is not
necessarily intuitive to think about. Why do you want something less
clean? How do you `reexecute' code in sane ways without a segmented
stack?

>>>4. C and C++ have first class functions. Functions can be passed as
>>> parameters, returned as function values, and assigned to variables.
>>
>>They cannot be unnamed. They cannot be parameterized. They are *not*
>>first class.
>
>I'm not sure what "They cannot be parametrized" means. For most
>interpretations of the sentence, it's wrong.

I am thinking of parameterization by currying. With higher order
functions, you create families of functions parameterized/specialized
for particular tasks. E.g. a function which does numerical
integration becomes a specialized integrator by being parameterized
with a particular function and can then be used to integrate various
intervals. A generic device interface becomes specialized for a
particular hardware environment by being parameterized by a particular
HW device driver. Et cetera. Sorry for the confusion.

>
>I would be among the first to argue that languages should provide real
>higher-order functions. However, it also seems clear that this requires
>garbage collection of one form or another.

I have yet to see an open system without a memory allocator.
Languages without a `pointer' abstraction save a certain class of
errors (i.e. one does not dereference bogus pointers). Dealing with
*values* and having the language implementation deal with what fits in
a register and what does not can save significantly in implementation
time.

I do not view automatic storage management as a bad thing. The "GC in
an uncooperative environment" people observed that even with partial
collection of C code, they were able to take care of storage leaks
which they found in almost all large C programs they looked at. Using
more memory to get faster development may buy market share where
time-to-market is important.

>... Furthermore it removes
>some control over allocation behavior from the programmer.

I don't fully agree with this. Don't use CONS. Allocate data
statically. Parameterize your functionals at compile time. Demand and
use non-cons runtime services.

> .. Both of
>these in turn cause some difficulty with programs operating under severe
>real-time constraints. C and C++ chose not to go this route.

Again, I don't fully agree. It depends on how close you are to the
limits. Some C implementations have allocators which don't allocate
in constant time (e.g. best fit) and fail in the real-time area. Some
highly constrained RT systems are coded in assembler because C is too
slow. Some real time applications have enough margin to collect
adequately if coded in low-cons style. The Appel-Ellis-Li real-time
collector (e.g. with a 68030) looks very interesting. I think that
the general picture is unclear enough here that it is difficult to
make a blanket statement about real-time systems in general.

> ... Given that
>they didn't, I know of no way to implement continuations efficiently,
>while sticking to the design philsophy of either language. Thus it doesn't
>make sense to me to add first-class continuations to C++. But this
>argument has nothing to do with their utility to the programmer.
>
>Hans-J. Boehm
>bo...@rice.edu

Questions of design philosophy aside, it is interesting to look more
closely at what `efficiency' is being asked for.

The ability to mix compiled and interpreted code gives a very short
design-code-test loop. Not dereferencing bogus pointers eliminates a
class of programming errors. *Efficiency in bringing products rapidly
to market.

Having first class functions helps encapsulation and can lead to
smaller, cleaner code and more code reuse--which means lower
maintenance costs. This means more production efficiency because
engineers can be building new product rather than doing maintenance.
*Efficiency in product production; less rework, less spoilage.

I guess if you are not building production code, you don't care about
time-to-build/time-to-market. But you probably aren't worried about
hard real-time performance either.

=====

Having gone this far, I guess I should make a philosophy statement and
do some language bashing so people really have something to flame
about. [ Don't get too excited; but I do like a good discussion! ]

I think C is a great language for writing device drivers. When it
comes to doing large systems (>200-500 KLOC), system complexity
becomes a larger dragon than execution speed. C has won because of
its simple execution model, closeness to assembler, and Un*x usage.
In C++, the simple execution model is lost [An assignment statement
may copy data 3 times. Strings may be reference counted within the
parameter passing mechanism. Arrays may not be allocated until first
use. Et cetera.] I know a number of people programming in C++ and
none of whom is happy with it. Using a very complex artifact like C++
[a language proper has a single defining document and a well-understood
semantics] does not help in minimizing system complexity.

Go ahead! Call me a language bigot! I deserve it! I have also written
50-100 KLOC each of Pascal and C code, some Smalltalk, APL, FP,
Fortran, and a fair amount of Scheme code. I find that Scheme gets in
my way the least. Having said that, I think that implementation
technologies only make sense in the context of implementations. In
some contexts, the language of choice may be C, Basic, Beta, Scheme,
Ada, assembler, etc. I go for the best technology I can to solve
problems in a given context. I think that the important thing is to
know more than 1 technology so that one knows what reasonable choices
exist and how to evaluate what is reasonable.

[Please, flames to /dev/null -- I freely admit, I don't know it all].

[Non-continuation follow-ups probably belong somewhere other than
comp.object]

-Ken

Patrick C Beard

unread,
Dec 7, 1989, 7:38:15 PM12/7/89
to
In article <51...@tekcrl.LABS.TEK.COM> ke...@mrloog.WR.TEK.COM (Ken Dickey) writes:
>In article <10...@june.cs.washington.edu> pet...@june.cs.washington.edu (Peter C. Damron) writes:
>>Let's get this straight.
>>...
>>2. C and C++ do not allow nested functions, but they do support block
>> structure. Thus the lexical scoping is not "trivial".
>
>In C you get a local/global namespace. You don't have name scoping in
>blocks nested more than 1 deep. That's pretty trivial.

Not true. In C you have a global namespace, a struct namespace, and multiple
nested local namespaces. For example:

int x;                  /* in global namespace. */

struct foo {
    int x;              /* struct namespace. */
};

routine()
{
    int x;              /* first local namespace. */

    {
        int x;          /* first of many possible nested local namespaces. */
                        /* you can nest as deep as you like. */
        ...
    }
}

At every new scope, the x's in the outer scopes are hidden. Why does this
constitute "trivial" name scoping?

Sorry if this response was long winded. Let's quit quibbling over
legitimate language differences. Some languages are good for one thing,
some another, none are suitable for every programming task.


-------------------------------------------------------------------------------
- Patrick Beard, Macintosh Programmer (be...@lbl.gov) -
- Berkeley Systems, Inc. ".......<dead air>.......Good day!" - Paul Harvey -
-------------------------------------------------------------------------------

David Chase

unread,
Dec 7, 1989, 8:28:23 PM12/7/89
to
In article <53...@rice-chex.ai.mit.edu> n...@rice-chex.WISC.EDU (Nick Nussbaum) writes:
> Those interested in continuations in C might want to look
>at a paper by Thomas Bruel which I believe was presented at the
>Usenix 88 conference. He proposed implementing closures by
>generating pointer to some automatically generated code which sets
>up the environment and then jumps into the function. Thus closures
>can be used in the same way as vanilla function pointers.

It's a neat trick (Stallman put me in touch with Thomas Breuel, who
described it to me), and I even used it for a while in a Modula-3
compiler. Most unfortunately, life gets much more complicated on
architectures/OSes that separate instructions and data (by page
protection, or in potentially inconsistent caches). Some calling
conventions also make this difficult -- it's dead easy on a Sun 1, 2,
or 3, less easy on a SPARC, and I'm not sure that it's tractable at
all on a MIPS, i860, or 88000. Bummer, that. It was great fun while
it lasted.

David

Ken Dickey

unread,
Dec 10, 1989, 9:52:12 PM12/10/89
to
In article <65...@pbhyf.PacBell.COM> js...@PacBell.COM (Joshua Stein) writes:
>Can someone post a non-implementation dependent definition of continuations;
>what they are, what they do, and how they fit into the object-oriented
>paradigm.

A continuation represents the default future for a computation.
Continuations are used to describe generalized `jumps' in Denotational
Semantics {see refs, below}. One way to think of continuations is that
functions never return, but hand their results to an extra argument, the
continuation, which consumes the result. Let's say that there is a magic
way to make procedures out of arbitrary chunks of code in C. Then we might
be able to do something like the following:

int gcd(int a, int b) {
    if (b == 0) {
        return( a ) ;
    } else {
        return( gcd( b, remainder( a, b ) ) ) ;
    } /* Boy, what a lot of parens! :^) */
}

could be transformed into:

gcd(int cont, int a, int b) {
    if (b == 0) {
        cont( a );
    } else {
        remainder( make-proc( int new-cont(int remainders_result)
                       { gcd(cont, b, remainders_result ) } ),
                   a,
                   b );
    }
}

gcd now never `returns a result'. It gives the result to the continuation
which it has as an (implicit) argument. The remainder routine now takes a
continuation also. So what magically happens above in the else clause is
that remainder is called with a new continuation argument which `does the
rest of the computation'. [You can think of make-proc as the compiler being
called in-line. Don't think of how C's stack would deal with this. It
doesn't, which is why all the messages on this topic].

You can think of getting a hold of the rest of the computation in the
source language. Now you can ignore it! To escape from a deeply nested
procedure, just ignore the continuation passed in and give your result to a
previously saved continuation. This is what happens with catch/throw in
Lisps or with setjmp/longjmp in C. But with first class continuations, you
can do more than this. You can invoke a continuation multiple times. You
can capture a continuation at every clock interrupt and switch to another
one (multi-tasking). You can search down a computational path, then go
back to a previous decision and try another path, then save that and go
back to the first path.

It is easy to get confused thinking about this. Let me try to give a more
operational model. I have included the following to give some idea of the
style of code generation this type of transformation leads to...

===================

The following is excerpted from "Evaluating Software Language Technologies"
[Pacific Northwest Software Quality Conference, 1989--you can guess who the
author is].


8. TECHNICAL APPENDIX: Scheme Samples


It is difficult to give the flavor of what programming in a highly paradigmatic
language is like in a small example. Here the Scheme programming language
is used to give a small taste. See [AbelSuss85] or [Dybvig] for more
extensive examples.


8.1 A BRIEF INTRODUCTION TO THE PROGRAMMING LANGUAGE SCHEME

Scheme inherits lexical scoping from Algol and syntax from Lisp. The
basic rule for evaluation is to evaluate all sub-expressions between
the balanced parentheses and apply the value of the first to the
values of the others. There are a few "special forms" which do not
evaluate all their arguments. These are noted as they occur.

Examples:
Scheme ~C
(<= lo-bound x hi-bound) ; ((lo-bound <= x) AND (x <= hi-bound))
(+ 23 31 4) ; (23 + 31 + 4)
((if (foo x) * +) 2 5 3) ; (foo(x) ? (2 * 5 * 3) : (2 + 5 + 3))
; I.e. the IF construct returns the function to
; be applied to the arguments 2, 5, and 3.

Scheme has very few fundamental constructs. E.G.:

abstraction: (LAMBDA (x) (* x x)) ; `lambda' indicates an unnamed function
(DEFINE (sq x) (* x x))

application: ((lambda (x) (* x x)) 2) ; should return 4
(sq 2) ; should return 4

conditional: (IF test consequent alternative)
; IF evaluates consequent XOR alternative
assignment: (SET! x 3)

sequencing: (BEGIN (set! x 3) (sq x))
; BEGIN returns the value of the last expression

; A COMMENT begins with a semicolon and runs to the end of the line.

Scheme is extended via syntax transformations. For example LET, which
declares local names can be implemented via lambda abstraction:

(let ((a 1) (b 2)) (+ a b)) <==> ((lambda (a b) (+ a b)) 1 2)

Scheme has excellent abstraction features and supports first class
procedures and continuations.

{Continuations represent the default future of a computation and their use
allows the creation of advanced forms of control structures used for example
to implement coroutines, backtracking and multiprocessing. They have
importance in program transformation strategies as well [Clinger87,
FelFriKohlDu, FriHaKohl84, FriHa85, HaFri84, HaFri87, HaFriWa84, HaFriWa86,
Wand80a, Wand80b].}


8.2 EXAMPLE: Variations on the Greatest Common Divisor (GCD) algorithm

Here are 4 variations on the GCD algorithm. The main point of this series
of examples is to show how a functional algorithm (without assignment) can
be transformed into an imperative one (without function calls). The
secondary objective is to show a number of styles [functional, imperative,
continuation-passing, object-oriented] in the same programming language and
demonstrate a (trivial) use of first class functions.

The first example is functional. The only real function call inside
GCD-1 is to "remainder". The other calls are in the `tail' position.
What this means in practice is that there is no more work to be done in
the function after the last call, so no extra return address has to be
passed. The "tail call" can use the same return address that was passed
in to the main function and so tail recursive loops can be compiled into
simple jumps. Recursive tail calls are just loops.


8.2.1 Functional GCD algorithm -- no assignment


(define (GCD-1 a b)
  (if (= b 0)
      a
      (gcd-1 b (remainder a b))
) )

(define (REMAINDER n d)      ; n=numerator, d=denominator
  (if (< n d)
      n
      (remainder (- n d) d)  ; This is a `tail-recursive' call. This
) )                          ; code compiles into a simple loop with
                             ; no recursion.

;------------------------------------------------------

In this second example, [1] has been transformed into Continuation Passing
Style (CPS). What is happening here is that, instead of returning a result,
each function gets an auxiliary argument, the continuation, which is a
function which gets called with the result. The top-level continuation
(identity in this case) is the only one that ever returns a result.
Essentially, the continuation holds the values which would be on the stack,
but in a form which can be manipulated directly. Note that CPS-REMAINDER
just passes along the same continuation in its loop. It "never has to push
a new return address."

The advantage of CPS is that it makes all temporary variables explicit. It
is done as part of the compilation process by some compilers [AppelJim,
Kranz].

8.2.2 Continuation Passing Style (CPS)


(define (IDENTITY value) value)  ; just take a value and return it.

(define (GCD-2 a b) (cps-gcd a b identity))

(define (CPS-GCD a b k)          ; k=continuation
  (if (= b 0)
      (k a)                      ; result is passed to the continuation
      (cps-remainder a b (lambda (v) (cps-gcd b v k)))
) )

(define (CPS-REMAINDER n d k)
  (if (< n d)
      (k n)
      (cps-remainder (- n d) d k)
) )


;------------------------------------------------------

The third example is worth some study. It was derived from [2] but is
basically code for a 2 register machine with a stack. The function calls
are now jumps. A function call is seen as a goto which passes arguments,
one of which is the continuation or return address. Note that the `extra'
parenthesis around "((pop STACK))" means that the result of popping the
stack is a function which is then called. The only real trick between [2]
and [3] is that the arguments are passed implicitly in registers. [You
may have to `hand execute' the code for a few values to convince yourself
that it really works].

The stack abstraction for (8.2.3) is just an example of a simple object in
the object oriented sense. Executing "(make-stack)" is roughly equivalent to
sending a stack `class' the `new' message. What is returned is an object (a
functional closure in this case) with its own local state (the `a-stack'
variable). Executing "(push STACK 5)" is implemented by sending the `push'
message to the STACK object. What is returned is a function which is
applied to the argument, 5, which is added to the stack. There are several
object systems available for Scheme (see e.g. [AdamsRees] for implementation
details).


8.2.3 Register Machine with Stack


; REGISTERS

(define R0 0)                ; result and temporary register
(define R1 0)                ; temporary register
(define STACK (make-stack))  ; stack abstraction


; ALGORITHM

(define (GCD-3 a b)
  ; setup initial state & run
  (set! R0 a)                ; set up registers
  (set! R1 b)
  (push STACK                ; set R0 as result of computation
        (lambda () R0))      ; just return its value
  (register-gcd)             ; jump to initial pc
)

(define (REGISTER-GCD)
  (if (= R1 0)
      ((pop STACK))          ; set PC to addr on stack
      (begin
        (push STACK R1)
        (push STACK gcd-continuation)
        (register-remainder) ; jump to label "REGISTER-REMAINDER"
      )
) )

(define (REGISTER-REMAINDER)
  (if (< R0 R1)
      ((pop STACK))          ; execute the saved continuation
      (begin
        (set! R0 (- R0 R1))
        (register-remainder) ; jump to label "REGISTER-REMAINDER"
      )
) )

(define (GCD-CONTINUATION)   ; value in R0
  (set! R1 R0)
  (set! R0 (pop STACK))
  (register-gcd)             ; jump to label "register-gcd"
)


; STACK ABSTRACTION

(define (MAKE-STACK)  ; --returns a new stack object each time it is called
                      ;   Code is shared by all returned "instances".

  (let ( (a-stack '()) )  ; This list implementation can be replaced by an
                          ; array for speed, with no change in interface.
    (define (stack-push value)
      (set! a-stack (cons value a-stack))
    )

    (define (stack-pop)
      (if (null? a-stack)
          (error "Attempt to pop from empty stack!" '())
          (let ( (result (car a-stack)) )
            (set! a-stack (cdr a-stack))
            result)
    ) )

    (define (dispatch message)  ; message passing style of dispatch
      (case message
        ((push) stack-push)     ; return function matching message tag
        ((pop) stack-pop)
        (else (error "STACK: Unrecognized message:" message))
    ) )

    dispatch)  ; Return the dispatch function as a new `stack object' instance.
)

; convert message passing interface to function call interface

(define (PUSH stack value)
  ((stack 'push) value))

;; "(stack 'push)" returns a function which is applied to "value"

(define (POP stack)
  ((stack 'pop)))  ; execute the stack's internal pop routine

;------------------------------------------------------
The fourth and final variant is what a compiler might emit as a result
of compiling the functional definition [1] and inlining remainder. It
is a straightforward transliteration of [3] into pseudo-code (and the
only "code" shown which has not been executed on a computer).


8.2.4 The above (8.2.3) as pseudo machine-code:
*
* START
*
GCD: * ENTRY SETUP -- assume R0 has A and R1 has B at call
store @END-GCD,++(SP) * push code address to come back to
jump REGISTER-GCD * go do the work
END-GCD:
rts * return-from-subroutine (comes back indirect)
* * R0 has result
*
REGISTER-GCD:
zero? R1
brFalse L1
load PC,(SP)-- * pop from stack to PC -- indirect branch
L1:
store R1,++(SP) * push R1
store @GCD-CONTINUATION,++(SP)* push label address (where to continue)
* jump REGISTER-REMAINDER * fallthrough
*
REGISTER-REMAINDER:
less? R0,R1
brFalse L2
load PC,(SP)-- * pop from stack to PC -- indirect branch
L2:
sub R0,R1 * result in R0
jump REGISTER-REMAINDER
*
GCD-CONTINUATION:
regmov R1,R0 * R1 <- R0
load R0,(SP)-- * stack value pop'ed to R0
jump REGISTER-GCD
*
* END
*


7. REFERENCES

[AbelSuss85] {Scheme-usage}
Abelson & Sussman: "Structure and Interpretation of Computer Programs",
MIT Press, 1985.

[AbelSuss86] {Scheme-usage}
H. Abelson & G. Sussman: "Computation, An Introduction to Engineering
Design", MIT Technical report, 1986.

[AbelSuss88] {Scheme-usage}
H. Abelson & G. Sussman: "Lisp: A Language for Stratified Design", Byte
Magazine (McGraw Hill), February 1988.

[AdamsRees] {Scheme-implementation}
N. Adams & J. Rees: "Object-Oriented Programming in Scheme", Proceedings of
the 1988 ACM Conference on Lisp and Functional Programming.

[Appel] {Automatic Storage Management}
Appel, Andrew: "Garbage Collection Can Be Faster Than Stack Allocation",
Information Processing Letters 25 (North Holland), 1987.

[AppelJim] {CPS}
A. Appel & T. Jim: "Continuation-Passing, Closure-Passing Style",
Proceedings 16th Annual Symposium on Principles Of Programming Languages
(ACM), January 1989.

[Barendregt] {Lambda Calculus}
H. Barendregt: "The Lambda Calculus: Its Syntax and Semantics", Studies in
logic and the foundations of mathematics, Volume 103, North-Holland 1984.
...

[Clinger84] {Scheme-compiler: proof of correctness}
W. Clinger:"The Scheme 311 Compiler, An Exercise in Denotational
Semantics", 1984 Symposium on Lisp and Functional Programming.

[Clinger87] {continuations}
W. Clinger: "The Scheme Environment: Continuations" Lisp Pointers, Vol 1
#2, June-July 1987.

[CliFreWa] {denotational semantics}
Clinger, Friedman & Wand: "A scheme for a higher-level semantic algebra"
in "Algebraic methods in Semantics", Ed: Nivat & Reynolds
Cambridge U Press, 1985.
...

[Dybvig] {Scheme-language}
K. Dybvig: "The Scheme Programming Language", Prentice-Hall, 1987.

[FelFriKohlDu] {continuations}
Felleisen, Friedman, Kohlbecker, & Duba: "Reasoning with Continuations",
Proceedings of the Symposium on Logic in Computer Science, IEEE Press, 1986.

[FriHaKohl84] {continuations}
Friedman, Haynes, & Kohlbecker: "Programming with Continuations", in "Program
Transformations and Programming Environments" Ed: P. Pepper, Springer-Verlag
1984.

[FriHa85] {continuations}
Friedman, Haynes: "Constraining Control". Proceedings 12th Annual
Symposium on Principles Of Programming Languages (ACM), January 1985.

[Gabriel89] {Rapid Prototyping-requirements}
R. Gabriel: "Draft Report on Requirements for a Common Prototyping System",
SIGPLAN Notices, Vol 24 #3, March 1989.
...

[HaFri84] {continuations}
Haynes & Friedman: "Engines Build Process Abstractions", 1984 Symposium
on Lisp and Functional Programming.

[HaFri87] {continuations}
Haynes & Friedman: "Abstracting Timed Preemption with Engines", Computer
Languages, Vol 20 #2, 1987 (pub Great Britain).

[HaFriWa84] {continuations}
Haynes, Friedman, & Wand: "Continuations & Coroutines", 1984 Symposium on
Lisp and Functional Programming.

[HaFriWa86] {continuations}
Haynes, Friedman, & Wand: "Obtaining Coroutines with Continuations",
Computer Languages: V11, #3/4 1986.

[HalSus89] {Scheme-usage}
M. Halfant & G. Sussman: "Abstraction in Numerical Methods", Proceedings of
the 1988 ACM Conference on Lisp and Functional Programming.
...

[Jones]
S. Peyton Jones: "Functional programming languages as a software engineering
tool", Lecture Notes in Computer Science, Vol 287, Springer-Verlag, 1987.

[Kranz] {Scheme-implementation}
Kranz et al: "Orbit, an Optimizing Compiler for Scheme", SigPlan Notices,
Vol 21 #7, July 1986.
...

[PohlEdelson]
I. Pohl & D. Edelson: "A to Z: C Language Shortcomings", Computer
Language, Vol 13 #2, 1988 [Pergamon Press, Great Britain].

[Rees84] {Scheme, T}
Rees, Adams, & Meehan: "The T Manual", 4th Ed.,
Yale U., January 1984.

[ReesCling86] {Scheme-language}
J. Rees & W. Clinger (eds): "Revised^3 Report on the Algorithmic
Language Scheme", SigPlan Notices, Vol 21 #12, December 1986.

[Roylance89] {Scheme-usage}
G. Roylance: "Expressing Mathematical Subroutines Constructively", Proceedings of
the 1988 ACM Conference on Lisp and Functional Programming.

[Schmidt] {denotational semantics}
Schmidt, David: "Denotational Semantics: A Methodology for Language
Development", Allyn & Bacon, 1986.

[Scott] {denotational semantics}
Scott, Dana: "Domains for Denotational Semantics" ICALP '82, Aarhus,
Denmark, July 1982.
...

[Stoy] {denotational semantics}
Stoy, Joseph: "Denotational Semantics: The Scott-Strachey Approach to
Programming Language Theory", MIT Press, 1977.
...

[Wand80a] {continuations}
Wand, Mitchell: "Continuation-Based Program Transformation Strategies",
JACM V 27, #1, January 1980.

[Wand80b] {continuations}
Wand: "Continuation-Based Multiprocessing", Proceedings of the 1980 Lisp
Conference.

John Gateley

unread,
Dec 11, 1989, 6:27:32 PM12/11/89
to
In article <65...@pbhyf.PacBell.COM> js...@PacBell.COM (Joshua Stein) writes:
>Can someone post a non-implementation dependent definition of continuations;
>what they are, what they do, and how they fit into the object-oriented paradigm.

A continuation of some point during the execution of a program is a function
which accepts the result produced at that point, and returns the result of the
rest of the program. To be more technical, you need to define points occurring
during execution and so on.

Basically, continuations are a different flavor of goto. They can do forward
jumping (called escape continuations), where you want to punt and jump
out of 20 levels of function calls at once. They can do backward jumping,
where you invoke the continuation after you have passed the point in the
program which created it (causing looping).
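Exceptions in many languages capture exactly the forward-jumping (escape)
half of this. A sketch in Python, with call_with_escape and search as made-up
illustrative names; note that the backward jump cannot be modelled this way,
since an exception can only unwind frames, never re-enter them — that takes
first-class continuations like Scheme's call/cc:

```python
class Escape(Exception):
    """Carries a value out through any number of stack frames."""
    def __init__(self, value):
        super().__init__(value)
        self.value = value

def call_with_escape(body):
    # body receives a one-shot "escape continuation": calling esc(v)
    # aborts the rest of body, however deeply nested the call is.
    def esc(value):
        raise Escape(value)
    try:
        return body(esc)
    except Escape as e:
        return e.value

def search(tree, esc):
    # Punt out of the recursion the moment a negative leaf is found,
    # jumping straight back to call_with_escape.
    if isinstance(tree, int):
        if tree < 0:
            esc(tree)
        return None
    for branch in tree:
        search(branch, esc)

found = call_with_escape(lambda esc: search([1, [2, [-7, 3]], 4], esc))
print(found)  # -7
```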

They are used to model different control constructs. Coroutines are the
canonical example; it is very easy to model coroutines given continuations.
They are extremely powerful, and consequently extremely dangerous, and
also tend to be as controversial as gotos.
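For comparison, generators in a language like Python package up just enough
of this to write simple coroutines: each yield saves the routine's
continuation so control can transfer away and later resume at that point.
A small illustrative sketch (Scheme would build the same thing from call/cc):

```python
def counter(label, limit):
    # Each yield suspends this routine, saving "the rest of the loop"
    # as a resumable continuation.
    for i in range(limit):
        yield f"{label}{i}"

# Interleave two routines by resuming each in turn:
trace = [x for pair in zip(counter("ping", 3), counter("pong", 3))
           for x in pair]
print(trace)  # ['ping0', 'pong0', 'ping1', 'pong1', 'ping2', 'pong2']
```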

I don't know how they fit in an object oriented paradigm.

John
gat...@m2.csc.ti.com

Ge' Weijers

unread,
Dec 12, 1989, 11:01:39 AM12/12/89
to
ch...@Ozona.orc.olivetti.com (David Chase) writes:
>However, the languages listed differ in their treatment of nested
>procedures:

>C forbids them (no closures).

>Pascal allows them, but (in Pascal Classic, at least) forbids
> their assignment to variables, use as parameters to
> procedures, or use in a RETURN. (no closures, because it
> turns out that a display can be used).

Not quite true: in the original Pascal, procedures were allowed as parameters,
though type safety went out of the window, since the parameters of the passed
procedure were not declared.

PROCEDURE example (PROCEDURE error);
BEGIN
IF NOT (assertion)
THEN error(4711)
END;

I believe something like this was actually legal. In more recent versions of
the language the procedure parameter must be fully typed.
Mr. Wirth probably did not dare to throw away the Algol-60-style procedure
parameters at the time, as some people thought they were useful. They still
are.

>Modula-3 allows them, but forbids their assignment to variables or
> their use in a RETURN. (closures needed, but must obey
> stack lifetime rules). (I hope this example isn't too
> obscure -- I'm rather familiar with it at the moment).

Just like in Algol 68. Only the lifetime rule was very tricky there:
the lifetime of a procedure is the lifetime of the 'youngest' non-local
name it accesses. The lifetime of a function not referencing any globals was
therefore infinite, wherever it was declared.
Ge' Weijers Internet/UUCP: g...@cs.kun.nl
Faculty of Mathematics and Computer Science, (uunet.uu.net!cs.kun.nl!ge)
University of Nijmegen, Toernooiveld 1
6525 ED Nijmegen, the Netherlands tel. +3180612483 (UTC-2)
