He says a lot of things, but one quote: "Stripped of its fancy jargon,
an object is a lexically-scoped subroutine with multiple entry
points and persistent state. OOP has been around since subroutines were
invented in the 1940s. Objects were fully supported in the early
programming languages AED-0, Algol, and Fortran II. OOP was, however,
regarded as bad programming style by Fortran aficionados".
And more: "...the programmer is invited to pass the cost of expedience
onto the user of the system. This wholesale sacrificing of runtime
efficiency to programmer's convenience, this emphasis on the ease with
which code is generated to the exclusion of the quality, usability, and
maintainability of that code, is not found in any production programming
environment with which I am familiar. Let's not forget the Intel
432...which was OOP in silicon, and it failed because it was just too
slow. If we couldn't make OOP efficient in hardware, why do we think we
can make it efficient when we emulate it in software?"
And the conclusion: "OOP runs counter to much prevailing programming
practice and experience: in allocating and controlling software costs,
in modularity, in persistent state, in reuse, in interoperability, and
in the separation of data and program. Running counter to the
prevailing wisdom does not, of course, automatically make an innovation
suspect but neither does it automatically recommend it. To date, in my
opinion, advocates of OOP have not provided us with either the
qualitative arguments or the quantitative evidence we need to discard
the lessons painfully learned during the first 40 years of programming".
Any comments?
--
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjo...@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Binghamton NY 13901
V All the world is biscuit shaped. . .
Pardon me, but is he discussing any OOP environment that is actually in
use??? How does C++ sacrifice runtime efficiency in ANY manner? (I mean,
if you've got to have virtual something you've got to have virtual
SOMETHING ...) How is Smalltalk deficient (compared to, say, FORTRAN)
in the quality, usability and maintainability of code? Does Mr. Guthery
perhaps prefer Ada? (Or, then again, maybe that's what he thinks represents
the future of OOP.)
DDJ has lately become a nasty little magazine in which I've seen all kinds
of off-the-wall viewpoints ... as an aside, I remember a few months ago when
one of the contributors stated flatly that there was no viable system for
winning at blackjack and that we shouldn't bother him with letters saying
there was ... now I have the perspective of someone who worked with a friend
on a thorough, closed numerical analysis of Atlantic City BJ and I can tell
you that their expected take per hand IF you play correctly is about .5%,
and that if you count correctly you can periodically find the deck in such
shape that YOU can eke out .1-.5% -- it's pretty tough on your brain, though.
But anyway, this lofty, uninformed tone is too prevalent in DDJ for me
nowadays and I just don't enjoy reading the magazine.
Sigh ... guess I'll have to go out and get this one, though ...
As others have said and others will say after me, if you want convincing
arguments FOR OOP, consult Bertrand Meyer's "Object-Oriented Software
Construction" -- heavy Eiffel slant, but the first few chapters are quite
general and tremendously readable.
v v sssss|| joseph hall || 4116 Brewster Drive
v v s s || j...@ecemwl.ncsu.edu (Internet) || Raleigh, NC 27606
v sss || SP Software/CAD Tool Developer, Mac Hacker and Keyboardist
-----------|| Disclaimer: NCSU may not share my views, but is welcome to.
> Any comments?
Yes. This article, like so many in DDJ, was bogus. The author carefully
concocted his arguments with the absolute worst examples that he could find
in the OOP world and presented the problems as if each one were typically found
in *every* OOP language, program, and system. What is worse is that he did
not acknowledge that the very same problems exist in the structured programming
way of doing things.
The following comments are little gripes for those who have read the article.
If you haven't read it, you probably want to hit n now.
His example of the Intel 432 was especially poor. Do all object oriented
languages use the 432 model? Not at all. Why didn't he mention SOAR? Or
has he even heard of it?
I'm glad that Grace Hopper used OOP in 1944. I wish he would have shown us
some of her code -- it would have made interesting reading. Unfortunately I
doubt that *he* has seen it. And what language did she use? What's that you
say? Object oriented assembler? Well, it doesn't surprise me that the author
feels that assembler is an OOP language. After all, he thinks that Algol and
Fortran II *FULLY* supported objects. Get real!
The author thinks that Stroustrup wrote an article in 1980 that described C++
and complains that 2.0 is incompatible with that description and that its
description isn't complete yet. He goes further and gripes saying "Can
anything that's this hard to define be good for writing understandable and
maintainable programs?" Well, I'm not going to argue the merits of C++, but
a language called "C with classes" was described in the paper referenced by
the author. It wasn't C++. Furthermore, I understand that the author's
beloved Fortran language description isn't complete yet. And it won't be
as long as people use Fortran.
The author also writes at some length about how inefficient OOP languages are,
why reusability is bad, etc. He also makes some arguments as to how inheritance
is misused and why it is bad for team projects. To back up his assertions he
gives anecdotal evidence about some projects he was involved with. Well,
I am quite sure there are plenty of bad examples that can be given for both
OOP and structured methods. I've seen a few with my own eyes. The author
fails in this area, as in the rest of the article, to give enough concrete
information to back up his claims. He doesn't say what language was used, how
many people were involved, what the goals of the projects were, what kind of
support the management gave the projects, and last but certainly not least,
how much experience the team members had with the language used and OOP in
general. Content-free arguments such as his are to be ignored.
I could go on, but to sum it up, the author makes up his own definition of
what he thinks object oriented programming is and proceeds to blast it
using poorly constructed, hearsay arguments. The article is virtually
free of any hard information that might back up the author's assertions.
I do have to agree with the author on one thing. There is far too much hype
being spread about the benefits that object-oriented programming will bring
to us all. I think that OOP is a GOOD THING and an improvement over the
structured methodologies that languages such as Algol, Fortran, etc. have
spawned. However, we still have a long road to travel before the benefits of
OOP are realized for most of us.
Barry Locklear
AT&T Bell Labs
attunix!lbl
l...@sf.att.com
(415) 324-6019
Stop reading arguments and try some OOP programming! Then read
the arguments and make up your own mind ;-)
>He says a lot of things, but one quote: "Stripped of its fancy jargon,
>an object is a lexically-scoped subroutine with multiple entry
>points and persistent state. ...
I'm glad he stripped the jargon. Too bad in removing the jargon
he left off inheritance, polymorphism and encapsulation - which are central
to an OOP approach to problem solving.
>OOP has been around since subroutines were
>invented in the 1940s. Objects were fully supported in the early
>programming languages AED-0, Algol, and Fortran II. OOP was, however,
>regarded as bad programming style by Fortran aficionados".
>
It's important to separate objects (data + methods) from the
rest of OOP. Fortran had multiple entry points (blech!) but didn't have
static data (i.e. persistent state) unless you think that global (common)
data qualifies. You might want to read Peter Wegner's "Dimensions of
Object-based Language Design" in OOPSLA '87 proceedings, where he
distinguishes between object-based and class-based and object-oriented
languages.
A real problem is that languages like Fortran do not map well onto
a total OOP approach to problem solving, and partial comparisons and
implementations are sadly lacking.
>... " Let's not forget the Intel
>432...which was OOP in silicon, and it failed because it was just too
>slow. If we couldn't make OOP efficient in hardware, why do we think we
>can make it efficient when we emulate it in software?"
The 432 was far ahead of its time, hardware- and software-wise.
Condemning OOP because of the 432 is like condemning aviation because
da Vinci's helicopter wouldn't fly. If the 432 were being designed today
with our accumulated knowledge of design of really big and complex ICs,
it would have a far better chance of succeeding.
OOP provides a view of modeling that does not necessarily match
the von Neumann view of computing machinery. But then, all languages are
compromises of the problem set and the target hardware. If your sole
measurement of success is execution efficiency on a particular machine,
then use an assembly language - but then, what can you do while waiting
for the user to hit return?
Basic, and more recently HyperCard, have succeeded more due to
their accessibility to programmers (and users!) than to their run-time
efficiency.
>And the conclusion: "OOP runs counter to much prevailing programming
>practice and experience: in allocating and controlling software costs,
>in modularity, in persistent state, in reuse, in interoperability, and
>in the separation of data and program. Running counter to the
>prevailing wisdom does not, of course, automatically make an innovation
>suspect but neither does it automatically recommend it. To date, in my
>opinion, advocates of OOP have not provided us with either the
>qualitative arguments or the quantitative evidence we need to discard
>the lessons painfully learned during the first 40 years of programming".
>
Anything new runs counter to prevailing practice and experience by
definition. We owe it to ourselves to investigate *new* approaches to
problem solving. The resurging interest in OOP seems to be coming at
a time when machines are powerful enough to support such concepts *and*
our software technology is also growing. (Compare the initial release
of Smalltalk on the Mac from Apple to Digitalk as an example. It's hard
to believe the performance differences between the two!)
Finally, prevailing practice and experience are not necessarily
the same thing as prevailing *wisdom*; in fact, the growth in the number
of articles on OOP indicates that the wisdom is growing. If this wisdom
shows that OOP is good, practice and experience will follow. If not,
then we will still have learned something. Either way, we win.
It is kind of scary to think that the Fortran model - one of our
first tries at programming - was the right way ;-).
< dave stampf
> I do have to agree with the author on one thing. There is far too much hype
> being spread about the benefits that object-oriented programming will bring
> to us all.
Hear, hear! Down with strongly-hyped languages!
Object-oriented programming is a tool. A useful tool indeed,
that can be applied in many contexts -- but not all! Part of
being a professional is knowing what tools are available and
how and when to apply them.
--
--Andrew Koenig
a...@europa.att.com
I'm not an OOP programmer either, and I haven't seen the article. I am
a Fortran programmer, most of the time, and, from the quotes here it
sounds like this article is a good nominee for "most bogus article
appearing in a formerly reputable journal." An object implemented as a
Fortran subroutine with multiple entries allows only *one* instance of
the object, unless you explicitly set up and save state for each instance.
This hardly qualifies as "fully supported."
What the author calls "OOP" is indeed (usually) bad programming style in
Fortran. This reflects the lack of useful support in the Fortran language
for this style of programming; it says nothing at all about the usefulness
of OOP techniques in general (or specifically in languages designed to
support OOP).
-Dan Riley (ri...@tcgould.tn.cornell.edu, cornell!batcomputer!riley)
-Wilson Lab, Cornell University
After reading the article, I initially thought it was a slam
against OOP, but after I thought about what Scott said, it seems to me after
some reflection that much of his criticism concerned the current
_implementations_ of OOP. What I liked about the article was that it took
a critical look at OOP, and hopefully now some of the drawbacks to current
implementations of OOP can be addressed, remembering that OOP should not be
treated as a panacea but used when appropriate.
I did have some observations about what he said about object hierarchies:
"The unit of reuse in object-oriented programming is the
heirarchy. It is the heirarchy and not the object that is
the indivisible whole. Unlike a subroutine library where
you can just take what you need and no more, in object
oriented programming you get the whole gorilla even if you
just wanted the banana." -- Scott Guthery
I think this could definitely be the case under some implementations,
where the link step is static, done before run-time. In this case, unused
methods are still included, even though during run-time you may not use
them. People have seen this happen; I believe the 'linker' that
comes with Turbo Pascal 5.5 tries to strip unneeded methods while making the
binary executable.
However, I think a better approach is already possible. Provided module
granularity is sufficiently fine to increase performance, unused methods
could be kept on disk, being paged in as needed, and if their use during
run-time decreases, could be paged out. This can be done today
in OS/2, Windows, and any other OS that does paging at run-time. However,
it seems to me (correct me if wrong) that this paging will have to be
OOP-sensitive to be effective, and I don't know whether this mechanism could
co-exist with preexisting software that is not OOP-developed. It is being
looked into; witness the announcement of a new memory-management system
developed for Quattro Pro by Borland, called VROOMM (Virtual Real-time Object
Oriented Memory Manager), that uses an LRU algorithm to keep the most frequently
used objects in RAM.
Related to this, Scott also had some observations about performance-tuning
OOP systems, where he related the inefficiency of OOP systems to that of the
Intel 432 project. I think there is one difference between software and
firmware; firmware usually needs to be pretty static, while software
has the necessary environment to be more dynamic in its use of primary and
auxiliary memory. I think the key here, as with debugging OOP-based systems,
is that our run-time environment will have to change in order to actively
support OOP systems. In terms of debugging, the OS will have to be more
flexible in handling run-time errors, and will have to give the developer more
information about the state not only of the machine, but of the OOP system
as well; it's like the next logical progression from assembly-level
debugging to source-level debugging, to (now) object-level debugging.
In summary, I think that the OS, or operating environment, will have to
change to allow OOP to be used effectively. What we have seen now is really
only a change in languages and development tools; what we need now are
operating environments that actively support OOP systems, while providing
the performance users demand and the control developers need.
Hasta...
Dario
+===============================+==================================+
| Dario Alcocer (San Diego, CA) | Internet......dar@nucleus.mi.org |
| via Nucleus | phone...............619-450-1667 |
+===============================+==================================+
From cjo...@bingvaxu.cc.binghamton.edu and his quote of Guthery:
He says a lot of things, but one quote: "Stripped of its fancy jargon,
an object is a lexically-scoped subroutine with multiple entry
points and persistent state. OOP has been around since subroutines were
invented in the 1940s. Objects were fully supported in the early
programming languages AED-0, Algol, and Fortran II. OOP was, however,
regarded as bad programming style by Fortran aficionados".
This would be true if it were only possible to make a single instance
of each object class. By making a C file with static variables and
several different functions that manipulate those variables, you can
get something close to object oriented programming. However, this
style is NOT OOP. It is difficult to maintain and has been abandoned
by OOP practitioners for very good reason. Guthery's mention of
multiple entry points isn't even close.
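To make that concrete, here is a minimal sketch of the style in question
(the file and the names are invented for illustration):

    /* counter.c -- file-scope static state plus functions that touch
       it: the "C module" style described above, NOT real OOP. */
    static int count = 0;        /* the one and only instance's state */

    void counter_increment(void) { count = count + 1; }
    int  counter_value(void)     { return count; }

A second, independent counter means duplicating the file or threading an
explicit state argument through every call -- exactly the maintenance
burden that drove practitioners away from this style.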
And more: "...the programmer is invited to pass the cost of expedience
onto the user of the system. This wholesale sacrificing of runtime
efficiency to programmer's convenience, this emphasis on the ease with
which code is generated to the exclusion of the quality, usability, and
maintainability of that code, is not found in any production programming
environment with which I am familiar.
This hardly deserves a reply, especially given the data reviewed on
the net recently showing comparable speed for C and C++ programs. One
point I'd like to make though is that sometimes the programmer is the
user. That is, programmers USE library code and system facilities to
program. Less sophisticated users also write code on a routine basis,
only that fact is usually obscured by a fancy interface, like a
spreadsheet.
Now for what I feel is a valid criticism. Object Oriented
Programs are generally not reentrant. This is generally not important
to most application writers, but an OO OS or embedded systems writer
would need to recognize that.
Paul Vaughan, MCC CAD Program | ARPA: vau...@mcc.com | Phone: [512] 338-3639
Box 200195, Austin, TX 78720 | UUCP: ...!cs.utexas.edu!milano!cadillac!vaughan
He says a lot of things, but one quote: "Stripped of its fancy jargon,
an object is a lexically-scoped subroutine with multiple entry
points and persistent state. OOP has been around since subroutines were
invented in the 1940s. Objects were fully supported in the early
programming languages AED-0, Algol, and Fortran II. OOP was, however,
regarded as bad programming style by Fortran aficionados".
He misses the point entirely. Fortran II and Algol are sort of like
object-oriented programming where you can only have one instance of
any given class. This makes it pretty useless.
Of course, you can do object-oriented programming in any language
whatsoever. The X Window System is written in C and uses
object-oriented programming with inheritance. The language (C) does
not provide *support* for this style, so it ends up being verbose and
clumsy, but it does work. Another interesting example is Sun's RPC,
the source code of which is free (you can get it from uunet). It uses
very simple object-oriented programming in C; no inheritance, but you
can clearly see that it's OOP with instances and with the equivalent
of virtual functions. They again have to do all the work by hand, but
they follow consistent stylistic guidelines so it doesn't mess the
code up too much. If Sun RPC were not written in that style, it would
be a much worse piece of software. They use OOP in order to allow
extensibility: several alternative network protocols, several
alternative authentication methods, several kinds of XDR stream.
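Not Sun's actual source, of course, but the general shape of the style
looks something like this (names invented):

    /* "Virtual functions by hand" in C: each instance carries a table
       of function pointers, so callers dispatch through the table
       without knowing which protocol is behind the object. */
    #include <stdio.h>

    struct transport;                       /* forward declaration */

    struct transport_ops {                  /* the hand-built "vtable" */
        void (*send)(struct transport *self, const char *msg);
        void (*destroy)(struct transport *self);
    };

    struct transport {
        const struct transport_ops *ops;
        void *state;                        /* per-instance data */
    };

    static void tcp_send(struct transport *self, const char *msg)
    { (void)self; printf("tcp: sending \"%s\"\n", msg); }
    static void tcp_destroy(struct transport *self) { (void)self; }

    static const struct transport_ops tcp_ops = { tcp_send, tcp_destroy };

    /* Generic code: the "virtual call" is just t->ops->send(t, ...) */
    void announce(struct transport *t) { t->ops->send(t, "hello"); }

Verbose, as I said, but the extensibility is real: adding a protocol means
adding one new ops table, with no change to the generic code.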
And more: "...the programmer is invited to pass the cost of expedience
onto the user of the system. This wholesale sacrificing of runtime
efficiency
This, of course, depends a lot on how much runtime efficiency is being
"sacrificed". The overhead in C++ is very small; this has been discussed
to death on this newsgroup.
to programmer's convenience, this emphasis on the ease with
which code is generated to the exclusion of the quality, usability, and
maintainability of that code,
That's really beyond the pale. OOP, in my experience, properly used,
greatly increases quality, usability and maintainability.
is not found in any production programming
environment with which I am familiar. Let's not forget the Intel
432...which was OOP in silicon, and it failed because it was just too
slow. If we couldn't make OOP efficient in hardware, why do we think we
can make it efficient when we emulate it in software?"
What an illogical argument. Some people at Intel figured out how to
do something very slowly, and they used hardware to do it,
therefore... Anyone who believes this has his brain turned off.
To say that this piece is poorly reasoned is to be too kind.
Dan Weinreb Object Design d...@odi.com
This is not a correct definition of OOP for the following reasons:
1) It completely ignores the issues of inheritance, data abstraction,
and data-hiding.
2) Objects aren't necessarily lexically-scoped (look at ST-80 blocks,
for instance).
3) (Maybe a bit picky as to terminology, but ...) an object really
can't be compared to a single subroutine, or even a single coroutine. It
might be better compared to a multi-threaded server, which provides
services (one per method) to access and modify a persistent state.
> OOP has been around since subroutines were
>invented in the 1940s. Objects were fully supported in the early
>programming languages AED-0, Algol, and Fortran II. OOP was, however,
>regarded as bad programming style by Fortran aficionados".
Again, not true if you look at the issues in 1) above.
>
>And more: "...the programmer is invited to pass the cost of expedience
>onto the user of the system. This wholesale sacrificing of runtime
>efficiency to programmer's convenience,
Ask a veteran C++ programmer about this "wholesale sacrificing". I have
programmed in OO Lisp, Smalltalk, and C++ (I've been primarily a
systems-level C programmer for the last six years, and before that I did a
lot of assembly language). I don't feel that I have sacrificed performance
to anything; in some cases I doubt that it would have been at all possible
to code the sorts of things I have done in C++ in C and get more than a few
percent improvement, and doing so would have made the projects I was
working on impractical (see my next comment).
> this emphasis on the ease with
>which code is generated to the exclusion of the quality, usability, and
>maintainability of that code, is not found in any production programming
>environment with which I am familiar.
Then it should be, because all three of those qualities are intimately
intertwined with ease of programming. Using OOP has allowed me to build
systems which are much MORE robust against changes, and much easier to
modify and enhance than would have been practical in C without much more
time and energy, which would not have been allowed on the projects.
Certainly, the data encapsulation alone is a major factor in increased
quality. Granted, that's not unique to OOP, but the behavior encapsulation
(inheritance and overriding of methods) is unique to OOP, and it is an
additional factor in improving quality.
> Let's not forget the Intel
>432...which was OOP in silicon, and it failed because it was just too
>slow. If we couldn't make OOP efficient in hardware, why do we think we
>can make it efficient when we emulate it in software?"
Yes, let's not forget it, but let's remember why it failed (I was in
another division at Intel at the time, and had reasonably good visibility
into the 432 project; in fact I had a prototype 432 board running in my
Intel blue box!). The problem wasn't OOP, it was in the implementation of
the silicon, and the way the software was layered on it. For instance, in
the Rev. 3 chip set, simply turning the access control data structures
upside down and addressing them off the object data pointers speeded almost
everything up by a factor of two. Also, because they couldn't fit as many
gates on a chip as they had originally planned, the processor had to be
put in two chips, with a 16-bit bus in between, which slowed things up
some. And so on. By the time they had the project far enough along to
start doing even the simplest optimizations, Marketing had already
published benchmarks which said that the 432 couldn't beat a lame dog into the
bath water, and the resultant public bad-mouthing killed any chance of
selling to anyone.
>And the conclusion: "OOP runs counter to much prevailing programming
>practice and experience: in allocating and controlling software costs,
>in modularity, in persistent state, in reuse, in interoperability, and
>in the separation of data and program.
I can't resist commenting on this remark about the separation of data and
program. It shows to me that the author of the article doesn't understand
OOP implementation at all. Certainly there are some OOPLs which allow you
to execute data (that's what LISP does for a living), but in the normal
course of events (i.e., if you don't twist things around to make it happen
yourself), neither Smalltalk nor C++ will do so. An object's state is
data, the methods which operate on it are code, and the two are separate.
You might choose which method to use on a given bit of state, or which
state to operate on, by looking at which or which kind of object you have,
but that's just normal programming practice. If you have ever coded a
finite-state machine you have built what amounts to an instance of an
object. Send it an input (invoke a method) and it will change its state
and (maybe) provide some output.
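To make the analogy concrete, a toy sketch (the class and its names are
invented):

    // A turnstile as a finite-state machine: the state is data, the
    // methods are code, and the two are separate.
    class Turnstile {
        enum State { LOCKED, UNLOCKED };
        State state;
    public:
        Turnstile() : state(LOCKED) {}
        int coin() { state = UNLOCKED; return 1; }   // input: a coin
        int push() {                                 // input: a push
            if (state == UNLOCKED) { state = LOCKED; return 1; }
            return 0;                                // locked: no entry
        }
    };

Send it an input (call coin or push) and it changes its state and gives
you some output, just as described.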
> Running counter to the
>prevailing wisdom does not, of course, automatically make an innovation
>suspect but neither does it automatically recommend it. To date, in my
>opinion, advocates of OOP have not provided us with either the
>qualitative arguments or the quantitative evidence we need to discard
>the lessons painfully learned during the first 40 years of programming".
On the contrary, OOP builds on lessons such as data abstraction.
>
>Any comments?
>
The rhetoric is very dramatic, but hardly compelling. I would be swayed
more if I believed that the author knew more about the subject being thrashed.
"Small men in padded bras don't look the same falling from high places."
- R.A. MacAvoy, "The Third Eagle"
Bruce Cohen
bru...@orca.wv.tek.com
Interactive Technologies Division, Tektronix, Inc.
M/S 61-028, P.O. Box 1000, Wilsonville, OR 97070
This is not the least bit compelling, because it is seriously incorrect
in many ways. The first and foremost is that there is no one language
which is efficient or inefficient. Individual implementations are
either relatively efficient or relatively inefficient. This goes for
any program.
There has been a large amount of interest in C++ lately. Why? Because
it does present a better model for programming than C, with minimal
performance penalty.
And please illuminate me (this is rhetorical, not directed at any
of the people whose text is included here) as to how you can generate
code easily, without it being quality, usable, and maintainable? Any code
which is not the last three can hardly be called a program. (As an
aside, I find it amusing that we call programs "codes")
>Pardon me, but is he discussing any OOP environment that is actually in
>use??? How does C++ sacrifice runtime efficiency in ANY manner? (I mean,
>if you've got to have virtual something you've got to have virtual
>SOMETHING ...) How is Smalltalk deficient (compared to, say, FORTRAN)
>in the quality, usability and maintainability of code? Does Mr. Guthery
>perhaps prefer Ada? (Or, then again, maybe that's what he thinks represents
>the future of OOP.)
Smalltalk was typically implemented as an interpreter, of course, which
made its environments much easier to use than your typical FORTRAN
system :-) It is comparing apples and oranges to compare the
speed of FORTRAN to the speed of Smalltalk. There has of course been
work on making Smalltalk a more efficient language, most notably the
SOAR project, which had some impressive results.
Like every other overly hyped topic, I feel that OOP has its merits and
its drawbacks. I never tire of hearing legitimate and well thought out
criticisms, or equally deserved praise. Mr. Guthery, on the other hand,
succeeds in making himself look foolish by perhaps arguing a valid
point, but for the wrong reasons.
Mark VandeWettering (even I don't share my opinions, why should my employer)
> He says a lot of things, but one quote: "Stripped of its fancy jargon,
> an object is a lexically-scoped subroutine with multiple entry
> points and persistent state. OOP has been around since subroutines were
> invented in the 1940s. Objects were fully supported in the early
> programming languages AED-0, Algol, and Fortran II. OOP was, however,
> regarded as bad programming style by Fortran aficionados".
Nope. Sorry. Uh uh. Wrong.
I might actually agree with the definition of objects. If you chose to
implement an object in a language, say Scheme, one obvious
implementation is as a closure with a message dispatcher (multiple entry
points) and persistent state maintained inside the closure.
What about inheritance? Instantiation of multiple instances of a class?
There is more to objects than just lexical closures.
And of course, the languages he dredges up had absolutely no support for
object oriented programming. Sure, you might be able to fake something
that vaguely resembled an object, but it is obvious that you are faking
it.
> Let's not forget the Intel
> 432...which was OOP in silicon, and it failed because it was just too
> slow. If we couldn't make OOP efficient in hardware, why do we think we
> can make it efficient when we emulate it in software?"
I haven't forgotten it: has Mr. Guthery ever read anything other than
glossy BS about it?
I hate this argument. First of all, the 432 had many other goals
besides being an effective object oriented processor. It was supposed
to be easily extendible to multiprocessor configurations (requiring no
code changes), and to provide data protection and a sophisticated fault
mechanism with recovery. It had many noble goals. It succeeded at a lot of them.
It was a huge chip. It was slow. It had no cache. It was a commercial
failure. So what? It succeeded at several of its goals.
But just building something "in hardware" doesn't make it fast. It
takes an appropriate design for the chip, the compilers, and the programs.
That technology is improving.
Mark VandeWettering
I haven't seen the DDJ print, but my advance notes include a summary
of "Twenty-Five Reasons" in 9 sections. Under "Theory", all of the
points apply equally to all programs using structures and pointers,
"Development" makes one valid point of there being a lack of proper
tools (Scott thinks toolmaking is bad, fortunately he doesn't build
ships), with 7 other points which apply to all development, "Debugging"
give 5 non-points where the fifth merely complains that the old tools
are not applicable, "Maintenance" likewise applies in all 7 points to
many programming systems, "Persistent State" shows three points common
in any program which uses trees & graphs, and the remaining sections
likewise jump on general problems as if they were peculiar to OOP.
This is not to praise OOP or to can Scott's well-said article, it's
just that I wish he'd stayed with 9 or ten good points rather than
bloating otherwise valid arguments with non-issues. For instance
(oops! can I say 'instance'?), he complains that OOP requires retooling
the development organization without regard for the retooling we did
when we shifted from patch-cords to punch-cards to consoles or from
programming in Hex to Fortran and later to C, Prolog or whatever. He
adds that he doesn't want YAPL (yet another programming language).
I think we all should read this. Everyone here found it at least
amusing. He has some very valid points which should be addressed by
any OOP plans and clearly dispels some of the marketing hype
surrounding OOP. I'd caution anyone not to rely on this paper when
deciding for or against OOP, but to instead use it as a guide for
which questions to ask.
--
Gary Murphy decvax!utzoo!dciem!nrcaer!cognos!garym
(garym%cogno...@uunet.uu.net)
(613) 738-1338 x5537 Cognos Inc. P.O. Box 9707 Ottawa K1G 3N3
"There are many things which do not concern the process" - Joan of Arc
Granted. But let's look at some figures:
pay_per_hour = rate_of_win * size_of_bet * bets_per_hour
Assuming:
rate_of_win = .005 (.5%)
size_of_bet = $500 (house limit)
bets_per_hr = 20 (3 minutes per hand)
gives a pay_per_hour of $50/hr. Hardly raking it in for such exhausting work.
There are also serious problems because this is an *average*, which means
there are possibilities of going $10 grand in the hole pretty easily.
You're gonna need a large bankroll and a lot of guts to stick it out till
the odds have a chance to work for you.
I'll stick to software, which pays better, and the work is easy :-).
Of course, I don't have a pretty girl plying me with drinks while I program :-)
All in all, I'd have to agree that blackjack is not viable. Gambling in
Vegas is strictly for 1) suckers and 2) entertainment.
Sort of. He makes a couple of good points addressed towards current
*implementations* of OOPLs, but has trouble even coming close to
the broad side of the barn, that is, addressing the general notion
of object-oriented program construction as an abstract method.
I'll address his points as laid out in the sections of the article:
- Where's the Evidence? [..for productivity increase..]
Guthery piles up some assumptions and points to some studies showing
little or no gain in productivity. Yet I've seen dozens of measurements
showing that OOPLs result in equivalent functionality with fewer lines
of code, and with shorter debugging times. I made such a small study
myself (though admittedly, my OOPL wasn't really one, lacking inheritance...
nevertheless). OOPLs aren't the silver bullet, but I'd say there's
plenty of evidence that they aren't chopped liver, either.
- What is an Object?
Guthery's explanation of what an object is "stripped of all the jargon"
is so far away from accuracy that it is hard to point out just where
his analogy IS accurate. Mainly, it is accurate in code locality and
the association of multiple names for operations. The analogy he
presents fails in most else, and yet these other points are what he
associates with OOPLs, and condemns them for. MOST bogus. One of
these, the "persistence" point, is addressed in more detail later
by Guthery, so I'll address it in more detail there also. Here I'll
just say that characterizing the specification of the operations
available on an object as "multiple entry points" is as blatantly
misleading as anything I've ever heard. And I've heard some doozies.
- What's the Cost of OOP Code Reuse?
Having claimed that things in OOPland haven't been measured in the
"Where's the Evidence?" section, Guthery nevertheless ploughs ahead
and makes bald unsupported statements such as "Your system will be
bigger, run slower, and cost more to maintain than with subroutine-
style reuse." Where is the evidence of this, other than Guthery's
handwaving? It runs counter to my experience.
- How to Combine Object Hierarchies
Briefly, the examples he gives of this are bogus, and are better
represented as objects containing other objects as part of their
state rather than as examples of inheritance. So the answer to the
question of "how to combine object hierarchies" is "certainly not
the way Guthery recommends or imagines it to be done".
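A quick sketch of the distinction, with invented classes:

    // Inheritance claims "is-a", which is wrong here:
    //     class Car : public Engine { ... };   // a Car is NOT an Engine
    // Containment claims "has-a", which is what is actually meant:
    class Engine {
    public:
        void start() { /* ... */ }
    };
    class Car {
        Engine engine;        // the engine is part of the car's state
    public:
        void start() { engine.start(); }
    };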
- How to Tune an Object-Oriented Program?
Here the issue of monitoring and debugging OOPLs is butchered. It is
true that some OOPLs are worse off than some procedural languages as far
as debugging and monitoring tools go, C++ in particular having debugging
problems. But this problem is not universal, and many of the
performance problems Guthery attributes to OOP methods are in fact
generic code re-use problems. For example, "the performance problem may
be in the code of a class you didn't implement", which is the same as
objecting to the use of the standard C libraries because they might
contain some slow routines that could be bettered. If a given class
from a library is too slow, fix it (if you own or control the owner of
the library), negotiate to get it fixed (if you can negotiate with the
owner), or create your own class with equivalent interface. These are
the same options open to somebody who finds that (say) qsort(3) is a
bubble sort in some unfortunate instance of the C library.
So, while C++ may be somewhat hard to debug and monitor, this is primarily
because the most common C++ compiler (cfront) produces no persistent
debugging database for any debugger or monitor to use. In cases where
this problem doesn't exist (e.g. G++ and gdb, or Smalltalk systems),
debugging and monitoring facilities are excellent.
And of course, the side-issue of the Intel 432 is NOT a slam on OOP methods.
I've seen many convincing analyses of why the 432 was slow, and it was
NOT because of OOP.
- Do Object Oriented Programs Coexist?
Here Guthery makes one of his near-approaches to sense.
OOPLs, because they hide their data representations, can make it
arbitrarily hard to co-exist with other OOPLs or even other procedural
languages. Yet these problems aren't problems of an OOPL, they are
problems of ALL languages where the language system takes extreme pains
to keep control of data formats and relieve the user of housekeeping
chores. For example, Lisp systems (or in general, systems with extreme
garbage collection). Yet even in these systems, it is not universal
that they cannot deal with foreign language systems (DG Lisp for AOS/VS
is the case I'm most familiar with in this regard). And, of course, C++
can deal with foreign code quite well.
- What are the Consequences of Persistent State?
The argument here is silly. Objects have state no more persistent than
primitive datatypes. Guthery's argument is like complaining that a
record type representing a symbol-table node is an evil thing, because
it contains persistent state. The state is no more persistent than the
object itself, and might well be automatic, and well scoped indeed. As
mentioned above, this whole argument is based on the idea that objects
are "really" groups of functions with a single instance of shared data
of process persistence. This is simply not the case.
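An invented example of the point:

    // The object's state lives exactly as long as the object, and the
    // object here is automatic and block-scoped.
    class SymbolNode {
        int refcount;              // the supposedly "persistent" state
    public:
        SymbolNode() : refcount(0) {}
        void touch() { refcount = refcount + 1; }
    };

    void parse_one_line() {
        SymbolNode n;              // created on entry...
        n.touch();
    }                              // ...destroyed on exit, state and all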
- Can You Get the Development System Out of the Production System?
Here Guthery makes the nearest approach to sense that he manages.
But again, this is not a problem with OOPLs, but with all languages
with extensive and interdependent libraries. Lisp systems are
particularly vulnerable to this, because references to parts of the
system can easily be discovered only as late as runtime. SNOBOL
systems also. So again, the problem of "applications delivery"
as I've heard it called is simply not a problem of OOPLs in general,
(for example, it isn't much of a problem in C++, really). It is
more of a problem of systems that blur the distinction between
compile-time and run-time (that is, interpretive systems).
So, to sum up, Guthery is simply missing the point most of the time.
His faulty analogy has led him on a wild goose chase after purported
problems with OOP which are really problems either with his mistaken
analogy, or with some (but not all) OOPLs.
So, put it this way.
The Baby: OOP as a method of design and implementation.
The Bathwater: Some insular or immature OOPLs.
It just doesn't make sense to throw out the former while changing
the latter.
--
Wayne Throop <backbone>!mcnc!rti!sheol!throopw or sheol!thr...@rti.rti.org
"Stripped of their fancy jargon, 'break', 'continue' and 'return' statements
are simply goto statements. The goto statement has been around since
programming languages were invented in the 1940s. It was fully supported in
the early programming languages (...). The goto statement was, however,
regarded as bad programming style by structured programming aficionados."
Therefore, of course, they are just as bad as goto's, and should be
eschewed for the same reasons.
:-)
--
geoff george Internet: fro...@pyr.gatech.edu
uucp: ...!{decvax,hplabs,ihnp4,linus,rutgers,seismo}!gatech!gitpyr!frobozz
"Ordinary f---ing people - I hate 'em. Ordinary person spends his life avoiding tense situations; repo man spends his life getting INTO tense situations."
>Granted. But let's look at some figures:
> pay_per_hour = rate_of_win * size_of_bet * bets_per_hour
>Assuming:
> rate_of_win = .005 (.5%)
> size_of_bet = $500 (house limit)
> bets_per_hr = 20 (3 minutes per hand)
>gives a pay_per_hour of $50/hr. Hardly raking it in for such exhausting work.
>There are also serious problems because this is an *average*, which means
>there are possibilities of going $10 grand in the hole pretty easily.
>You're gonna need a large bankroll and a lot of guts to stick it out till
>the odds have a chance to work for you.
There is no table in Vegas that takes 3 minutes to play a single
blackjack hand. They should be able to sustain a much higher rate. You
also have neglected the possibility of a single player playing multiple
hands.
Another way to extract maximum profit in minimum time is to vary bet size,
although doing so is a pretty sure-fire way to tip off the house that you
are counting.
>All in all, I'd have to agree that blackjack is not viable. Gambling in
>Vegas is strictly for 1) suckers and 2) entertainment.
If you are gambling, you are a sucker. If you have an edge (and in
blackjack, it might be possible depending on the rules in effect at a
particular casino) then it's just cashing a paycheck.
mark
I take exception to this phrasing! We want to keep the language.
It's the silly hype that we want to get rid of.
Richard Sargent Internet: ric...@pantor.UUCP
Systems Analyst UUCP: uunet!pantor!richard
I don't understand you. What prevents a C++, Objective-C, or Eiffel
program from being reentrant?
---------------------------------------------------------------------
Anders Bjornerstedt E-mail: and...@cuisun.unige.ch
Centre Universitaire d'Informatique
12 rue du Lac, CH-1207 Geneva
---------------------------------------------------------------------
Tel: 41 (22) 787.65.80-87 Fax: 41 (22) 735.39.05
Home: 41 (22) 735.00.03 Telex: 423 801 UNI CH
---------------------------------------------------------------------
Some forget that progress is only made when people are willing to try new
things, some of which will fail.
It is kind of scary to think that the Fortran model - one of our
first tries at programming - was the right way ;-).
Without the will to try new things we'd all be stuck with the Fortran model
-- or worse. OOP is just one of the ``new things'' (only 20 years old or
so) to try. Try it. Make up your own mind.
// marc
--
// Marco S. Hyman {ames,pyramid,sun}!pacbell!dumbcat!marc
Strong hyping is for weak whines...
Jonathan S. Shapiro
Silicon Graphics, Inc
Personally, I regard Fortran programming as bad style myself ( :-) ).
Seriously, what a load of ____! This is an apples and oranges argument. Of
course OOP is bad Fortran style - Fortran isn't an OOP language. So what?
I don't see a point here.
> And more: "...the programmer is invited to pass the cost of expedience
> onto the user of the system. This wholesale sacrificing of runtime
> efficiency to programmer's convenience, this emphasis on the ease with
> which code is generated to the exclusion of the quality, usability, and
> maintainability of that code, is not found in any production programming
> environment with which I am familiar.
Programming convenience != sacrificing runtime efficiency.
Ease of maintenance != sacrificing runtime efficiency.
Should we sacrifice programming convenience and ease of maintenance for
improved runtime performance? I want them all, and most serious software
developers do, too. Any tradeoffs are made on a case-by-case basis. OOP is
a tool.
Many people can write efficient, maintainable code in a "convenient to use"
OOP. I know that I am capable of writing inefficient, unmaintainable code
in a "less-than convenient" language like C and Fortran. It's how you use
the tool as much as it is what the tools give you.
> Let's not forget the Intel
> 432...which was OOP in silicon, and it failed because it was just too
> slow. If we couldn't make OOP efficient in hardware, why do we think we
> can make it efficient when we emulate it in software?"
The 432 is not slow because it is "OOP in silicon." It was slow partly
because of its Hydra-based capability mechanisms (I said partly. There may
have been other reasons as well).
Don't forget that the 432 predates a lot of the current research in OOP.
Slow performance of one example != OOP is slow.
Sorry for clogging this group with my ramblings. I've made commentaries on
this (and other) groups in the past, but at least a network like this allows
dialog between the author and the readers. Sure, DDJ has a letters page,
but who knows if they'll publish letters against this article, or if they do
they might let the author give a reply like "Here are these idiot OO people
proving my point..." along with further nonsense. Grrr!
Thomas V. Frauenhofer ...!rutgers!rochester!cci632!ccird7!tvf
*or* ...!rochester!kodak!swamps!frau!tvf *or* ...!attctc!swamps!frau!tvf
"Had coffee with Melnick today. He talked to me about his idea of having all
government officials dress like hens." - Woody Allen
Software and BlackJack look a lot alike:
pay_per_hour = (rate_of_win * size_of_bet * bets_per_hr) / #_programmers
Assuming:
rate_of_win = .05 (5%)
size_of_bet = $100,000 (UNIX software source license fee)
bets_per_hr = 10 (How fast can I sell them?)
	#_programmers = 1000 (to maintain a UNIX...)
gives a pay_per_hour of: $50
>Of course, I don't have a pretty girl plying me with drinks while I program :-)
All things considered, BlackJack is much more fun.
Jon
For those of you who are not already conversant with continuations,
the idea comes from Scheme (though it has earlier antecedents), and
looks like this:
(call-with-current-continuation
(lambda (continuation)
(body...)))
The call-with-current-continuation form potentially returns multiple
times, because the continuation can be saved to a global variable. A
subsequent invocation can be done with:
(continuation value-producing-form)
which causes a return from the call-with-current-continuation with
that value.
Logically, the continuation captures the future state of the program
at the point at which it was captured. That is, it captures a pc, a
set of registers, and a stack. Note that this implies garbage
collection of stack frames, because a live frame in an enclosing
context can be shared between a continuation and subsequent code.
Continuations have seen use as error handlers, because they look like
function invocations. One can set up an exception handling routine,
capture it with a continuation, and use the continuation as a
"longjump" type mechanism to get out of a bad state.
Continuations are different from longjump in that they capture the
stack as well as the registers.
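For the C programmers, a sketch of the nearest standard-library analog,
with the difference spelled out (names invented):

    /* setjmp/longjmp as a one-shot, escape-only "continuation".
       Unlike a Scheme continuation, it can only unwind the stack; it
       can never re-enter a frame that has already returned. */
    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf handler;             /* the captured "continuation" */

    static void deep_work(void) {
        /* ... discover a bad state ... */
        longjmp(handler, 1);            /* "invoke": unwind to setjmp */
    }

    int main(void) {
        if (setjmp(handler) == 0)       /* "capture": may return twice */
            deep_work();
        else
            printf("recovered from bad state\n");
        return 0;
    }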
A detailed description can be found in the Revised Revised... Report
on Scheme, which can be obtained via anonymous FTP from
zurich.ai.mit.edu.
Jonathan Shapiro
Silicon Graphics, Inc.
I do not believe that it is feasible to add continuations to C++ for
any number of reasons, but I would be interested to hear the reactions
in this community regarding their utility in object-oriented programming.
Hmmm. Something in this caught my interest, but I'm not quite sure what.
Taking a purely object-oriented stab at it, might the idea of continuations be
implemented as merely messaging a static data area? Is there more to it?
--
===================================================================
David Masterson Consilium, Inc.
uunet!cimshop!davidm Mt. View, CA 94043
===================================================================
"If someone thinks they know what I said, then I didn't say it!"
I do not believe that it is feasible to add continuations to C++ for
any number of reasons, but I would be interested to hear the reactions
in this community regarding their utility in object-oriented programming.
They are just as useful in object-oriented programming as in straight
Lisp or Scheme. However, continuations in the Scheme style are only
useful if full support is provided for lexical scoping. C and C++
have no lexical scoping whatsoever. So I agree that it is not
feasible to add them to C++. Even lexical scoping alone, with or
without continuations, would be a great benefit, but I'm sure I'll
never see it in C or C++.
How can you make this claim?
>However, continuations in the Scheme style are only
>useful if full support is provided for lexical scoping. C and C++
>have no lexical scoping whatsoever.
I just had to reply when I saw this. C and C++ are definitely "lexically
scoped" (I would prefer to call it statically scoped).
Peter.
---------------
Peter C. Damron
Dept. of Computer Science, FR-35
University of Washington
Seattle, WA 98195
pet...@cs.washington.edu
{ucbvax,decvax,etc.}!uw-beaver!uw-june!peterd
>I just had to reply when I saw this. C and C++ are definitely "lexically
>scoped" (I would prefer to call it statically scoped).
Yes, they are, but they don't provide support for closures as
first-class objects (the funarg problem). So they don't provide
"full" support for lexical scoping. Scheme/ML do.
--chet--
mur...@cs.cornell.edu
I don't see why it would not be feasible; C++ has for loops and other
control structures, why not continuations as well?
As for object oriented continuations: A continuation is a technique for
modelling/creating flow of control. In pure object oriented systems, this
is done by message passing. Thus, an OO continuation would probably be
something which modelled a process: like a coroutine object, which you send
messages to (like resume). This is a half baked idea, but maybe worthwhile.
I do not know if you are talking about first-class continuations or not, but
in languages which are stack-based, first-class continuations are not trivial
to add. (By stack-based I mean that the procedure entry/exit sequence
behaves in a stack-like fashion.) In C the idea of setjmp/longjmp comes as
close to continuations as one can get. Admittedly, this is not as elegant as
Scheme's notion of continuation but workable in concept.
>
>Jonathan Shapiro
>Silicon Graphics, Inc.
Vinod Grover
Sun Microsystems.