
Language Efficiency


Robert C. Bethel

Apr 2, 1995

Anyone know of research papers that deal with the subject
of language efficiency? By efficiency I mean the quality
of machine binaries (code space, execution time, etc.)
given an identical program coded in several languages.

Thanks
--
Robert...@CBIS.Com

Harold P Zbiegien

Apr 4, 1995

Shouldn't this be labeled "compiler efficiency"? There may be no inherent
difference in efficiency between one language and another, but there sure
are differences between the generated code coming out of compilers.

Harold
American Greetings Corp.

Kennel

Apr 4, 1995

This is heresy but I disagree.

Different languages make fundamentally different choices resulting
in very different kinds and amounts of knowledge that is provided to the
compiler.

Assuming an approximately constant level of knowledge and resources
between compiler groups this can make a huge difference in the speed
of executable code.


Bob Kitzberger

Apr 4, 1995

Robert C. Bethel (rob...@cbis.com) wrote:
: Anyone know of research papers that deal with the subject

: of language efficiency? By efficiency I mean the quality
: of machine binaries (code space, execution time, etc.)
: given an identical program coded in several languages.

If there were such papers, they would be suspect. A _language_ is
merely a paper definition of syntax and semantics; you must compare
language translation _implementations_ (i.e. compilers). And compiler
versions, coding styles, idioms, target platforms, etc.

--
Bob Kitzberger +1 (916) 274-3075 r...@rational.com
Rational Software Corp., 10565 Brunswick Rd. #11, Grass Valley, CA 95945
"...the solution to the problem is usually to pee on it" -- Dave Barry

Larry Kilgallen

Apr 4, 1995

In article <3lrrqk$k...@usenet.INS.CWRU.Edu>, bb...@cleveland.Freenet.Edu (Harold P Zbiegien) writes:
>
> In a previous article, rob...@cbis.com (Robert C. Bethel) says:
>
>>Anyone know of research papers that deal with the subject
>>of language efficiency? By efficiency I mean the quality
>>of machine binaries (code space, execution time, etc.)
>>given an identical program coded in several languages.
>>
>>Thanks
>>--
>>Robert...@CBIS.Com
>>
> Shouldn't this be labeled "compiler efficiency"? There may be no inherent
> difference in efficiency between one language and another, but there sure
> are differences between the generated code coming out of compilers.
>
> Harold
> American Greetings Corp.

Although efficiency of compilers is the controlling issue for most
environments (due to the low state of optimization for most widely
used compilers), language efficiency is a separate issue which would
be of interest if compilers were not masking the effects.

One opinion is that "higher level" languages with strong type checking
give the compiler a better chance to understand what the programmer
really intended and optimize accordingly.

Of course this also depends on the programmer having coded what was
really intended.

Finally, let me note that the two specific examples given by Robert,
"code space" and "execution time" are often at odds with each other,
so a robust compiler will let the user indicate a preference.

Larry Kilgallen

Mitch Gart

Apr 5, 1995

Robert C. Bethel (rob...@cbis.com) wrote:
: Anyone know of research papers that deal with the subject

: of language efficiency? By efficiency I mean the quality
: of machine binaries (code space, execution time, etc.)
: given an identical program coded in several languages.

About 10 years ago I wrote an article on "Benchmarking Ada, C,
and Pascal". I translated a set of small programs called
the Hennessy Benchmarks from Pascal to Ada and C, and then
compared the execution time efficiency of the programs when
compiled and run on a PC. The results looked good for Ada:
a few programs were a little faster in Ada, a few others were
a little faster in C, but overall there were no large differences
that made one language look more or less efficient than the
others.

I could dig out the paper if anyone wants to see it.

But as other people posting responses to this question have
noted, comparisons like this compare compilers as much as
languages. My paper compared Alsys Ada to Lattice C and
Turbo Pascal on an old 16-bit Intel 286 PC-AT machine. At the time
the PC-AT was the most advanced PC, and Lattice C was one of
the best C compilers for the PC. Technology has changed a lot
since then. In particular, I think C compiler vendors have made
dramatic improvements in their optimizers in the last ten years.

Mitch Gart

David Weller

Apr 5, 1995

In article <D6K81...@oasis.icl.co.uk>, Mike Wilson <m...@oasis.icl.co.uk> wrote:
>> If there were such papers, they would be suspect. A _language_ is
>> merely a paper definition of syntax and semantics; you must compare
>> language translation _implementations_ (i.e. compilers). And compiler
>> versions, coding styles, idioms, target platforms, etc.
>
>Yes, but I could generalise and say that some languages such as 'C'
>were designed with efficiency in mind. That's not to say a rotten
>compiler wouldn't produce huge, slow code. There _are_ very efficient
>COBOL compilers around. I've heard ICL's VME COBOL compiler produces
>very efficient code (but I don't have any first-hand knowledge of it).
>

C was designed to be quite efficient for machines in the late '70s.
I'd hardly say it has a lock on the "efficiency" bag anymore. Then
again, I would never claim that _any_ language has a lead on
efficiency. Some languages have constructs that can "assist"
efficient compiler implementations (array slicing comes to mind, and
it's not in C), but that doesn't mean the compiler _writer_ has to
choose an efficient implementation.

Bottom line: It's _not_ the language, but the language _can_ give
hints :-)

However, to end the argument once and for all, I did a comparison of
Ada and C++. I ran both sources on top of a Pentium system.....

All it did was make the source listings a little warmer :-)

--
Frustrated with C, C++, Pascal, Fortran? Ada95 _might_ be for you!
For all sorts of interesting Ada95 tidbits, run the command:
"finger dwe...@starbase.neosoft.com | more" (or e-mail with "finger" as subj.)
if u cn rd ths, u r gd enuf to chg to Ada :-)

Larry Kilgallen

Apr 5, 1995

In article <D6K81...@oasis.icl.co.uk>, m...@oasis.icl.co.uk (Mike Wilson) writes:

> Yes, but I could generalise and say that some languages such as 'C'
> were designed with efficiency in mind. That's not to say a rotten
> compiler wouldn't produce huge, slow code. There _are_ very efficient
> COBOL compilers around. I've heard ICL's VME COBOL compiler produces
> very efficient code (but I don't have any first-hand knowledge of it).

Actually, as I understand it efficiency works the other way around.
As a low level language, C lets a programmer considerably affect the
generated code when a non-optimizing compiler is used. Switch to a
different compiler or move to a different instruction set and what
was formerly optimal is no longer optimal.

Likewise, if you use optimizing compilers, high level languages will
produce better results than C.

Obviously the lowest level language processors, which directly express
the machine instructions to be used, leave the least opportunity for any
optimization of what the programmer has provided. (Although for
Alpha VMS, DEC did have to build a compiler for the VAX assembly
language, and I think there is at least one optimization which can
be enabled.)

Larry Kilgallen

Mike Wilson

Apr 5, 1995

> If there were such papers, they would be suspect. A _language_ is
> merely a paper definition of syntax and semantics; you must compare
> language translation _implementations_ (i.e. compilers). And compiler
> versions, coding styles, idioms, target platforms, etc.

Yes, but I could generalise and say that some languages such as 'C'
were designed with efficiency in mind. That's not to say a rotten
compiler wouldn't produce huge, slow code. There _are_ very efficient
COBOL compilers around. I've heard ICL's VME COBOL compiler produces
very efficient code (but I don't have any first-hand knowledge of it).

--------------------------------------------------------------------
Mike Wilson m...@oasis.icl.co.uk
ICL Medical Portfolio, Kings House, Kings Road, Reading, RG1 3PX, UK

Lawrence Free/ A.F. Software Services

Apr 5, 1995

Robert C. Bethel (rob...@cbis.com) wrote:
: Anyone know of research papers that deal with the subject
: of language efficiency? By efficiency I mean the quality
: of machine binaries (code space, execution time, etc.)
: given an identical program coded in several languages.

: Thanks
: --
: Robert...@CBIS.Com

A problem with language efficiency comparisons is the programmer's skill.
I had a customer who asked how I changed the COBOL to have my application
run 5 - 10 times faster than the equivalent from another vendor. I informed
him that the language and compiler were the same; the programming was different.

Lawrence A. Free | Email: fr...@mcs.com
A.F.Software Services, Inc. | Your MS/DOS, UNIX
(708) 232-0790 | business software developer


Ray Toal

Apr 5, 1995

In article <3ls7u0$3...@stc06.ctd.ornl.gov> m...@jt3ws1.etd.ornl.gov (Kennel) writes:

>Harold P Zbiegien (bb...@cleveland.Freenet.Edu) wrote:

>> In a previous article, rob...@cbis.com (Robert C. Bethel) says:

>> >Anyone know of research papers that deal with the subject
>> >of language efficiency? By efficiency I mean the quality
>> >of machine binaries (code space, execution time, etc.)
>> >given an identical program coded in several languages.
>> >
>> >Thanks
>> >--
>> >Robert...@CBIS.Com
>> >

>> Shouldn't this be labeled "compiler efficiency"? There may be no inherent
>> difference in efficiency between one language and another, but there sure
>> are differences between the generated code coming out of compilers.

>This is heresy but I disagree.

>Different languages make fundamentally different choices resulting
>in very different kinds and amounts of knowledge that is provided to the
>compiler.

Actually what is *more* important is what the language definitions *allow*
compilers to get away with. Fortran compilers are allowed to assume that
two arrays passed to the same subroutine/function do not alias each
other (even if they do). So you'll have fast running code but you'll
have to "trust the programmer". I'm not really sure why there would
(in principle) be a difference in execution time between a Fortran
program that was compiled without assuming "noalias" and an Ada program
in which all checks were suppressed. Alas, 30+ years of Fortran compiler
technology shows that in practice there usually is...

>Assuming an approximately constant level of knowledge and resources
>between compiler groups this can make a huge difference in the speed
>of executable code.

Strong typing, and for that matter good uses of subtyping in Ada, of course
help a good compiler optimize away checks. But when you compare Ada
runtimes to those with languages that are essentially unchecked there
are differences.

Ray Toal


Ruthifren

Apr 6, 1995

> Likewise, if you use optimizing compilers high level languages will
> produce better results than C.

Is it just me or is our perception of compiler technology far beyond the
reality?

___________________________________________________________
Jon Gamble: lowly CS student UAH

Larry Kilgallen

Apr 6, 1995

In article <3lvru0$b...@mars.earthlink.net>, klei...@earthlink.net (Ken
Leidner) writes:

> I think that a "good" programmer in a given language has more to do with it
> than what the language is, unless the language just can't support the
> problem at all. An understanding of what the compiler is going to do has a
> lot to do with what the code coming out of the back side looks like.

Unfortunately that is true for a lot of compilers, but if a
programmer actually writes a high level language program (as
distinguished from writing a low level language program with
a high level language compiler), a good optimizing compiler
should be able to make up for any lack of knowledge which
the programmer has about compiler behaviour.

Not to pick on Fortran, but just to pick on Fortran programmers :-),
it has been said that someone with 20 years of experience just
programming Fortran can readily adapt to _any_ computer language
and write working programs that still look like Fortran.

Larry Kilgallen

--
| Fidonet: Larry Kilgallen 1:270/211...@fidonet.org
| Internet: 270-211-777!Larry.K...@mdtnbbs.com
| Standard disclaimer: The views of this user are strictly his own.
| From Mdtn_BBS 1 717 944-9653 [ In the Heart of Three Mile Island ]

Ken Leidner

Apr 6, 1995

In article <3lmt64$s...@dplanet.p2k.cbis.com>, rob...@cbis.com says...
>Language Efficiency

A part of this question that people seem to have missed is that some languages
are better for coding one type of program than another. It's part of the
reason why we have more than one language. Each language was designed to let
the programmer express a certain set of "problems" easily: FORTRAN to
translate mathematical formulas, COBOL for business, SNOBOL for text
searching ...

I think that a "good" programmer in a given language has more to do with it
than what the language is, unless the language just can't support the
problem at all. An understanding of what the compiler is going to do has a
lot to do with what the code coming out of the back side looks like.

Ken Leidner
Systems Programmer
klei...@earthlink.net

Robert Dewar

Apr 6, 1995

"Finally, let me note that the two specific examples given by Robert,
"code space" and "execution time" are often at odds with each other,
so a robust compiler will let the user indicate a preference."

yes, people often say this, but in my experience it is much less true
than people expect. The pragma Optimize (Space|Time) of Ada has very
seldom had much effect.

Most modern highly optimizing compilers do NOT in fact have a selection
for time vs space. That's because in practice, reducing the number of
instructions reduces the execution speed on modern RISC machines.

Sure you can find examples, e.g. pulling things out of loops, where this
is theoretically the case, but I still think that in practice, over a big
chunk of code, there is not the kind of dichotomy that the quote above
suggests.


Robert Dewar

Apr 7, 1995

Please substantiate the "heresy" that different languages make a big
difference in generated code efficiency.


Ross Driedger

Apr 9, 1995

Crookie <g...@mfltd.co.uk> wrote:
>>de...@cs.nyu.edu (Robert Dewar) wrote:
>>
>>>Ross Driedger says:
>>>
>>>"All I need to do is load my Smalltalk environment, type "2 + 2" in the
>>>transcript and invoke the 'Show It' command. Three seconds later, I get
>>>an answer."
>>>
>>>That is completely irrelevant, what you see in a particular implementation
>>>of a particular language does not necessarily have anything whatsoever to
>>>do with the languages involved.
>>>

Nonsense. Smalltalk being a true OOL, with SmallInteger four layers of
lineage from Object, has everything to do with the matter. If all I want to
do is move the number 2 into EAX, add a memory address that contains
the value 2, and output it, then the concepts of the language make all the
difference. I don't even want to know the steps involved in doing this in
Smalltalk -- life's too damn short. Do you have any guesses on how long it
takes in C or Assembler or Pascal, or even Lisp?

>>>TO make the argument that there is some inherent difference in language
>>>efficiency, you have to give very specific examples of semantic features
>>>and argue that it is impossible to implement comparable features with
>>>comparable efficiency. What any particular implementations happen to do
>>>is not part of the discussion.

Just did. I have personal experience on this, too. I worked on a Smalltalk
project that was cancelled because the client's 486 took 3 minutes to load
the image in a demo. A comparable app in C++ would have been up in less than
5 seconds.

>>er, i thought the inherent difference in S/T (as opposed to Object COBOL
>>or C++) is that it does _everything_ by messages. even 2+2.

Precisely.

This is not to say that Smalltalk can't be a useful language under the proper
circumstances (high-powered hardware, an optimized OS and compiler, and an
app that is not computationally intensive), but I cannot think of an application
where C++ will end up with less efficient machine code than Smalltalk.

This thread reminds me of those on comp.lang.smalltalk where every now and then
a few weenies try to convince everyone that Smalltalk can produce code that
is as fast as C++ (or C, even!). There are a few realists who come
out with a reality check.

If I need computational speed I won't use Smalltalk _because_ it is an OOL.
I might use C++ because I can revert to C or Assembler if I need to (besides,
it is not an OOL in the pure sense).

Ross Driedger
cs4g...@maccs.dcss.mcmaster.ca


"The whole problem with the world is that fools and fanatics are always so
certain of themselves, but wiser people so full of doubts." (B. Russell)

Kennel

Apr 9, 1995

Ruthifren (ruth...@aol.com) wrote:
> > Likewise, if you use optimizing compilers high level languages will
> > produce better results than C.

> Is it just me or is our perception of compiler technology far beyond the
> reality?

Fortran.

In this respect, Fortran *is* high level in the important sense that its
pointers are nonexistent (in f77) or well controlled (Fortran 90).

Fortran arrays are indexed through integers, and no other way.

This restriction vs. C allows more optimization, because the compiler
can assume more. As in all mathematics, more assumptions lead to more
powerful and specific theorems.

The theorem here is "this object code is an implementation of this
source"; the assumption is that element A(i) is not at the same location
as B(j) for i .ne. j, which does not hold for C.

matt


Kennel

Apr 9, 1995

Robert Dewar (de...@cs.nyu.edu) wrote:
> Please substantiate the "heresy" that different languages make a big
> difference in generated code efficiency.

Different assumptions built into one language vs. another make some
implementation strategies that are simple and easy in one language either
uneconomically difficult {usually} or outright impossible.

mbk

Robert Dewar

Apr 9, 1995

Kennel says:

"Different assumptions built into one language vs. another make some
implementation strategies that are simple and easy in one language either
uneconomically difficult {usually} or outright impossible."

This isn't substantiation, it is just a vague expression of opinion with
no substance. Can you give SPECIFIC examples of what you mean? While I
can think of marginal instances of the above principle, I think as a
general statement of the situation it is completely WRONG!

I obviously can't give examples of the absence of a phenomenon, so really
you need to give some examples of its presence!


Ross Driedger

Apr 9, 1995

m...@jt3ws1.etd.ornl.gov (Kennel) wrote:
>>Robert Dewar (de...@cs.nyu.edu) wrote:
>>> Please substantiate the "heresy" that different languages make a big
>>> difference in generated code efficiency.
>>
>>Different assumptions built into one language vs. another make some
>>implementation strategies that are simple and easy in one language either
>>uneconomically difficult {usually} or outright impossible.

All I need to do is load my Smalltalk environment, type "2 + 2" in the
transcript and invoke the 'Show It' command. Three seconds later, I get
an answer.

Good thing I'm not doing fractal calculations in Smalltalk.

Robert Dewar

Apr 10, 1995

Ross Driedger says:

"All I need to do is load my Smalltalk environment, type "2 + 2" in the
transcript and invoke the 'Show It' command. Three seconds later, I get
an answer."

That is completely irrelevant, what you see in a particular implementation
of a particular language does not necessarily have anything whatsoever to
do with the languages involved.

To make the argument that there is some inherent difference in language
efficiency, you have to give very specific examples of semantic features
and argue that it is impossible to implement comparable features with
comparable efficiency. What any particular implementations happen to do
is not part of the discussion.

Robert Dewar

Apr 10, 1995

Robb Nebbe says:

Compare Smalltalk with Ada. There are choices made in Smalltalk that
introduce an overhead that is not present in Ada. On the other hand,
if you compare C++ and Ada the languages are close enough that the
quality of the implementation is likely to dominate rather than
any differences in the language.

Again, a general statement with no specifics. Sure, the easiest place to look
for differences of the kind we are talking about is to compare languages
that are generally implemented with interpreters with languages that are
generally compiled. However, such comparisons are misleading; an interpreted
C can be much slower than compiled Lisp, to take examples of things that
exist.

Current Smalltalk implementations are dreadfully inefficient; it would
certainly be possible to compile MUCH more efficient code in Smalltalk than
is currently done. Note that the fluid environment of a Smalltalk system
is an artifact of the implementation, desirable for development and
debugging, but antagonistic to efficiency.

(P.S. this is an old area for me, you may not be aware that I developed the
SPITBOL implementations of fast compiled SNOBOL4, which showed that many
assumptions about what can and cannot be compiled efficiently are not
always correct).

Anyway, Robb, again, general statements that there ARE differences don't
contribute to the discussion unless you give specific examples. I can't
give specific examples of something that I think does not exist!


David Weller

Apr 10, 1995

In article <dewar.797513130@gnat>, Robert Dewar <de...@cs.nyu.edu> wrote:
>Current Smalltalk implementations are dreadfully inefficient, it would
>certainly be possible to compile MUCH more efficient code in Smalltalk than
>is currently done. Note that the fluid environment of a Smalltalk system
>is an artifact of the implementation, desirable for development and
>debugging, but antagonistic to efficiency.
>
As you've pointed out, Robert, don't make assumptions about
implementations :-) Envy's solution to Smalltalk compacts the
runtime down to an efficient executable image. They tend to blow
away people's perception of Smalltalk's efficiency. I wouldn't claim
it's ready for "real-time" yet (though others do!), but it's
certainly snappy.

>(P.S. this is an old area for me, you may not be aware that I developed the
>SPITBOL implementations of fast compiled SNOBOL4, which showed that many
>assumptions about what can and cannot be compiled efficiently are not
>always correct).
>

Or that you had a pretty firm hand in the compiler for CA-Realia
COBOL :-)

Robb Nebbe

Apr 10, 1995

In article <dewar.797513130@gnat>, de...@cs.nyu.edu (Robert Dewar) writes:

|> Again, a general statement with no specifics. Sure the easiest place to look
|> for differences of the kind we are talking about is to compare languages
|> that are generally implemented with interpreters with languages that are
|> generally compiled. However such comparisons are misleading, an interpreted
|> C can be much slower than compiled Lisp, to take examples of things that
|> exist.

Among other things type information is very useful for optimization. In
the context of separate compilation the lack of type information in
Smalltalk creates overhead that you don't have enough information to
get rid of. Obviously if you have the entire program then, at least
theoretically, you could produce the same code, since you could propagate
type information (after having inferred it), but I would expect that to
be far from trivial.

Robb Nebbe


Crookie

Apr 10, 1995

er, i thought the inherent difference in S/T (as opposed to Object COBOL
or C++) is that it does _everything_ by messages. even 2+2.

--
Crookie
g...@mfltd.co.uk

Robert Dewar

Apr 10, 1995

Ross, your four levels are what you see in *your* Smalltalk, rest assured,
it is *not* impossible to implement integer addition efficiently in
Smalltalk. No guesses are needed, the strategy is straightforward. Sure
it is more work than in Pascal, but that is about all you can say.


Robert Dewar

Apr 10, 1995

Look, if Ross Driedger wants to define OOL as not including C++, why argue
the point? People can make words mean anything they want. Of course, if they
choose a peculiar definition, like this one, they had better make sure that
people understand the unusual usage (as Ross did).

Arguing about the meaning of technical terms is silly.

Now of course there can be a hidden agenda here

OOL languages are good
C++ is not an OOL language
Therefore C++ is bad

This logic is of course doubly flawed :-)


Robert Dewar

Apr 10, 1995

In Smalltalk it is true that everything is done by messages at the semantic
level, but that does not mean you have to implement 2+2 that way!

After all in Ada, if A and B are of type integer, then the official semantics
says that

A := B;

requires a constraint check to make sure that the value B is in range of
type Integer. Well we certainly don't expect to see *code* for such a check
generated in a decent Ada compiler.

Similarly in COBOL

MOVE A TO B

semantically requires all sorts of things that a good compiler can shortcut.

Lawrence Free/ A.F. Software Services

Apr 10, 1995

Richard Ladd Kirkham (gt2...@prism.gatech.edu) wrote:
: In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
: >
: > C++ . . . is not an OOL in the pure sense)

: Why?

: --
: Richard Ladd Kirkham
: Georgia Institute of Technology, Atlanta Georgia, 30332
: uucp: ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt2782a
: Internet: gt2...@prism.gatech.edu

I'm not sure about the original poster's reasoning but 1. You can do
object oriented programming in both C and C++, and 2. You can avoid doing
object oriented programming in both C and C++. I have not dealt with a
language that forces the issue (there may be some). You can use object
oriented techniques in most computer languages. In COBOL it's efficient,
in C it's recommended, in assembler, we call it survival.

Richard Ladd Kirkham

Apr 10, 1995

(Lawrence Free/ A.F. Software Services) writes:
>Richard Ladd Kirkham (gt2...@prism.gatech.edu) wrote:
>: Ross Drieger (spell?) wrote:
>: >
>: > C++ . . . is not an OOL in the pure sense)
>
>: Why?
>
>I'm not sure about the original poster's reasoning but 1. You can do
>object oriented programming in both C and C++, and 2. You can avoid doing
>object oriented programming in both C and C++. I have not dealt with a
>language that forces the issue (there may be some). You can use object
>oriented techniques in most computer languages. In COBOL it's efficient,
>in C it's recommended, in assembler, we call it survival.
>
Now I'm more confused. If there's really a difference between OOP and
mere structured programming, then there are damned few object-oriented
techniques that can be used in "most computer languages". To mention just one
example, does COBOL have generic functions, what C++ calls templates?
Does plain C?

David Weller

Apr 10, 1995

In article <3mcfbf$p...@acmez.gatech.edu>, Richard Ladd Kirkham <gt2...@prism.gatech.edu> wrote:
>In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
>>
>> C++ . . . is not an OOL in the pure sense)
>
>Why?
>

I vote we kill this thread before we even start it. I've watched
this whole silly thing about "pure" vs. "impure" languages so much in
comp.object I just wanna puke. The phrase "<X> language is not an
OOL in the pure sense" is just nonsense. IMHO, it reveals the
poster's ignorance of the history of programming languages.

<Walking away, grumbling>

Richard Ladd Kirkham

Apr 10, 1995

In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
>
> C++ . . . is not an OOL in the pure sense)

Why?

--

Robb Nebbe

Apr 10, 1995

In article <dewar.797469506@gnat>, de...@cs.nyu.edu (Robert Dewar) writes:

|> Kennel says:
|>
|> "Different assumptions built into one language vs. another make some
|> implementation strategies that are simple and easy in one language either
|> uneconomically difficult {usually} or outright impossible."
|>
...

|>
|> I obviously can't give examples of the absence of a phenomenon, so really
|> you need to give some examples of its presence!

Compare Smalltalk with Ada. There are choices made in Smalltalk that
introduce an overhead that is not present in Ada. On the other hand,
if you compare C++ and Ada the languages are close enough that the
quality of the implementation is likely to dominate rather than
any differences in the language.

Robb Nebbe

Fernando Mato Mira

Apr 11, 1995

In article <3mcoh6$a...@Starbase.NeoSoft.COM>, dwe...@Starbase.NeoSoft.COM (David Weller) writes:
> In article <3mcfbf$p...@acmez.gatech.edu>,
> Richard Ladd Kirkham <gt2...@prism.gatech.edu> wrote:
> >In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
> >>
> >> C++ . . . is not an OOL in the pure sense)
> >
> >Why?
> >
>
> I vote we kill this thread before we even start it. I've watched
> this whole silly thing about "pure" vs. "impure" languages so much in
> comp.object I just wanna puke. The phrase "<X> language is not an
> OOL in the pure sense" is only nonsense. IMHO, it reveals the
> poster's ignorance of the history of programming languages.

Good point. Purity is irrelevant. However, I would be tempted
to call all single-dispatch languages `subject-oriented', and
only those with multi-methods `object-oriented'.

--
F.D. Mato Mira http://ligwww.epfl.ch/matomira.html
Computer Graphics Lab mato...@epfl.ch
EPFL FAX: +41 (21) 693-5328


Ross Driedger

Apr 11, 1995

de...@cs.nyu.edu (Robert Dewar) wrote:
>>In smalltalk it is true that everything is done by messages at the semantic
>>level, but that does not mean you have to implement 2+2 that way!

Even Smalltalk's proponents, if they have any credibility, admit that there
is a major performance hit. I'm not going to buy a Pentium just so I can
pretend I am using a 486 with Smalltalk.

After the last few posts I checked my documents from the OOPSLA conference
in '92 regarding writing efficient Smalltalk. The author claims that, at best,
well-written Smalltalk ranges from half to one fifteenth the speed of C.

But I am suspicious of this, because all his 'examples' pitted well-written
Smalltalk against C that had some major problems: there were bugs, or there
was a lack of good C libraries. Well-implemented C or C++ was never discussed.

I know this post doesn't represent any authoritative discussion on the subject;
it is just my experience that Smalltalk doesn't produce the nice crisp executable
that properly implemented C++ does. Clients notice when screen redraws are mushy
or the load times seem long. Language purists might care about matters such as
whether the inheritance is multiple or not -- clients don't and ultimately it is
the client that is going to pay me. If the language I use is an ugly mongrel
that still produces a good quality image and is relatively safe if I use common
sense and self discipline, I will consider its use.

Smalltalk, as implemented for the PC, doesn't qualify at this time. Why? Just
add 2 + 2 in a transcript and ask yourself if you can live with the time it took
to evaluate it. I can't.

Larry Kilgallen

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
In article <dewar.797512974@gnat>, de...@cs.nyu.edu (Robert Dewar) writes:

> TO make the argument that there is some inherent difference in language
> efficiency, you have to give very specific examples of semantic features
> and argue that it is impossible to implement comparable features with
> comparable efficiency. What any particular implementations happen to do
> is not part of the discussion.

Consider a type of number ranging from 0 to 17 in Ada and also
some other language. In Ada, the programmer has specified the
range of the number, whereas for the sake of discussion presume
the other language does not have that ability.

The programmer has thus given the Ada compiler "more information"
(not her fault) about the nature of the application.

If both compilers target a RISC processor where 32 bits is the
smallest multiply available, the Ada compiler can multiply two
such numbers without fear of hardware arithmetic overflow, only
checking the _result_ for legality. On many processors, hardware
exceptions are much more difficult to handle, but because the Ada
program had _more_information_ about what the programmer _really_
wanted, waiting until things get to the hardware exception stage
is not required.

I am not a compiler expert, but the general idea is that the more
information a compiler is given, the more efficiently it can implement
the program. A language may provide more or less ability to express
exact desires to a machine.
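The range argument above can be transposed into C++ as a minimal sketch (the `Small` type and function names are illustrative, not Ada): a value known to lie in 0..17 can be multiplied in 32-bit arithmetic with no fear of hardware overflow, since 17 * 17 = 289 fits easily, so only the *result* needs a legality check.

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>

// Illustrative stand-in for an Ada subtype with range 0 .. 17.
struct Small {
    std::int32_t v;
};

std::int32_t mul_checked(Small a, Small b) {
    std::int32_t r = a.v * b.v;   // cannot overflow: at most 289 in 32 bits
    if (r < 0 || r > 17)          // legality check on the result only
        throw std::range_error("result out of range 0..17");
    return r;
}
```

The point is that the range declaration lets the translator skip any overflow handling on the multiply itself and fall straight through to a single comparison on the result.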

Does that make machine language programming most efficient? Only
in cases where the programmer in question can faultlessly take into
account all of the instruction scheduling nuances of a particular
processor. And doing by hand what machines (compilers) can do faster
is no fun.

Larry Kilgallen

Bill Winn

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
In article <3mcior$b...@Mars.mcs.com>, fr...@MCS.COM wrote...
<snip>

>I'm not sure about the original poster's reasoning but 1. You can do
>object oriented programming in both C and C++,

Technically speaking, object-oriented programming requires inheritance
(among other things), which C does not support but C++ does. Therefore
one cannot do OOP in C.


Bill Winn
Software Engineer - Analysts International Corporation
-------------------------------
wjw...@kocrsv01.delcoelect.com
ww...@klingon.iupucs.iupui.edu
My views do not express the views of anyone except my alter-ego.
Janet! Brad! Janet! Dr. Scott! Rocky! Uhh!
- Brad, Janet, Dr. Scott, Janet, Dr. Frankenfurter, Rocky
"Rocky Horror Picture Show"


Ross Driedger

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to

Fine! The interior compiler details are hidden from my view, because
that's the way the environment is implemented.

What I do know for certain is that the steps undertaken to do a simple
addition of two small integers are far more complex than what derives
from Assembly, C or C++. If I want an integer object and can afford the
cycles, then I'm happy to use what I'm provided. But sometimes I just want to
add 2 to 2, and I want to do it with much less overhead than Smalltalk provides.

C++'s strength is that it doesn't force me into an overhead laden method when
I can't afford it. I prefer the pragmatic approach; C++ gives me the option
of doing things in ways that are appropriate for the situation. My pragmatism
doesn't stop there; I'll use Smalltalk when its weaknesses are not exploited
by the problem.

It's just unrealistic to think that intrinsic problems with any language can
be 'optimized out'. Smalltalk's major problem is computational inefficiency.

Ross Driedger

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
de...@cs.nyu.edu (Robert Dewar) wrote:
>>Look if Ross Driedger wants to define OOL as not including C++, why argue
>>the point. People can make words mean anything they want. Of course if they
>>choose a peculiar definition, like this, they had better make people sure
>>that they understand the unusual usage (as Ross did).

Hey, dude, chill! What I'm saying is not much different than what Meyer
or Lalonde say when talking about their 'pet' languages. My view differs
with these two gents in that I believe:
(1) it is not necessary to use a 'pure' language,
(2) it is important to choose an appropriate tool for a problem,
(ex: C is not my language of choice to implement an RDB)
(3) go with what works best for you.

>>Arguing about the meaning of technical terms is silly.

My God! We actually agree on something! ;-)

>>Now of course there can be a hidden agenda here

What 'hidden agenda'? I'm on this group to gain from others' experience in C++.

>>OOL languages are good
>>C++ is not an OOL language
>>Therefore C++ is bad

Oh, please! Don't go interpolating things that are not there. I rate the
'goodness' or 'badness' of a language by how much cash I can potentially make by
slinging its code. By that standard, C++ is a pretty good language.

>>This logic is of course doubly flawed :-)

In view of my feelings on the three previous statements you attributed to me,
this one is garbage if it is also directed at me. I'll give you the benefit of
the doubt and believe the smiley was sincere....

Bill Winn

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
I forgot something important in my last posting:

Although one cannot do OOP in C or Ada83, one can do OO Analysis and Design
and capture that in C or Ada83 via object-based coding (analogous to
programming with ADTs).

Sorry about the C intrusion (and Ada83 for you C++'ers)...Now back to your
scheduled language discussions

Bill Winn
Software Engineer - Analysts International Corporation
-------------------------------
wjw...@kocrsv01.delcoelect.com
ww...@klingon.iupucs.iupui.edu

My views do not express the views of anyone except my alter-ego.

"Where ever you go, there you are."
- Buckaroo Bonzai


Robert Dewar

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
"Good point. Purity is irrelevant. However, I would be tempted
to call all single-dispatch languages `subject-oriented', and
only those with multi-methods `object-oriented'."

You can be tempted (and indeed go ahead and use) any old peculiar
terminology you want, but arguing about terminology is totally
irrelevant.

Unless again, you are using this invalid argument technique of
implying quality by terminology:

object oriented is good
single dispatch languages are not OO
therefore single dispatch languages are not god

(interesting mistype in last word, I think I will leave it in :-)

Obviously this implied logic is bogus, so we will assume that the total
content of your post was simply about terminology. But that's pointless
to discuss.

So what is your *technical* point here, if any?


Matt Austern

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
In article <3meiee$8...@kocrsv08.delcoelect.com> wjw...@kocrsv01.delcoelect.com (Bill Winn) writes:

> Technically speaking, object-oriented programming requires inheritance
> (among other things), which C does not support but C++ does. Therefore
> one cannot do OOP in C.

C doesn't have inheritance built in, but it is possible to do it
manually. That's also true of polymorphism and dynamic type
dispatching. I have seen object oriented C code.

I like to make the distinction between object oriented design (a
method of design that focuses on object identity and relationships
rather than on algorithms), object oriented programming (a programming
style, used to implement object oriented designs, that involves
encapsulation, data abstraction, inheritance, and polymorphism), and
object oriented languages (languages which provide specific built-in
support for object oriented programming).

Sometimes it's also useful to distinguish between languages that
provide direct support for object oriented programming as well as
other styles of programming, and languages that try to encourage
object oriented programming above all other styles. C++ is clearly
one of the former.

--

--matt

Robert Dewar

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
"Although one cannot do OOP in C or Ada 83"

a peculiar statement, but I guess you can manage to define OOP in a narrow
way that makes this a true, but not particularly useful, statement.


Robert Dewar

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
Ross, you still talk about your Smalltalk *implementation* as though it
provided definitive answers on the efficiency of the *language* (please note
the title of this thread). It does not; it merely tells you how
efficiently your vendor has implemented Smalltalk -- most interesting if
you have to use that implementation, but not really relevant to this
thread of discussion.


Robert Martin

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:

>If I need computational speed I won't use Smalltalk _because_ it is an OOL.
>I might use C++ because I can revert to C or Assembler if I need to. (besides
>it is not an OOL in the pure sense)

Being an OOPL is irrelevant to speed. Smalltalk is slow (relatively
speaking) because it is interpreted, and because messages are looked
up in hash tables (or the equivalent).

Purity of OO (whatever that may be) also has nothing to do with speed.
Messages, after all, are really just function calls. Polymorphic
messages need be nothing more than calls to functions via function
pointers. In C++, this is exactly what they are.

Whether C++ is pure or not is irrelevant. It is not its impurity
that makes it faster. Rather, it is the mechanism behind virtual
member function dispatch.
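A minimal sketch of that claim (class names illustrative): a polymorphic call in C++ is a virtual call dispatched through a per-class table of function pointers, the vtable, rather than a run-time hash-table lookup.

```cpp
#include <cassert>
#include <string>

struct Shape {
    virtual ~Shape() = default;
    virtual std::string name() const { return "shape"; }
};

struct Circle : Shape {
    std::string name() const override { return "circle"; }  // fills this class's vtable slot
};

// One indirect call through the vtable -- roughly the cost of calling
// through a function pointer, regardless of how many classes exist.
std::string describe(const Shape& s) {
    return s.name();
}
```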

--
Robert Martin | Design Consulting | Training courses offered:
Object Mentor Assoc.| rma...@rcmcon.com | Object Oriented Analysis
2080 Cranbrook Rd. | Tel: (708) 918-1004 | Object Oriented Design
Green Oaks IL 60048 | Fax: (708) 918-1023 | C++

Robert Martin

unread,
Apr 11, 1995, 3:00:00 AM4/11/95
to
Crookie <g...@mfltd.co.uk> writes:

>er, i thought the inherent difference in S/T (as opposed to Object COBOL
>or C++) is that it does _everything_ by messages. even 2+2.

The notion that in Smalltalk everything is objects and messages is
often touted as the justification for calling it a pure OOPL. But the
concept is nonsense. Methods are just functions. Messages are just
function calls. Polymorphic message dispatch is just calling
functions through pointers.

The difference between an OO language and a non OO language is that an
OO language provides facilities for managing polymorphic dispatch and
for encapsulating functions and data structures.

Fernando Mato Mira

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to

I didn't mean to imply that single-dispatch languages are not good.
The terminology might be `peculiar' but would probably be more exact
(too late to change, I guess).

The point is, a lot of people have no idea of the existence of
multiple dispatch, and that's why you find the typical `peculiar'
reverse OO syntax in most OO languages.
And then you find yourself with things like CORBA, where it seems
that no thought whatsoever was given to this issue.

As I said before, Ada scored by keeping a
traditional procedural (functional) syntax, IMHO.
Combine this with the above and what I am saying
is that `Ada is both good and right'

Of course, that doesn't mean that I don't think that
having multiple dispatch in Ada wouldn't be better
and that the constraining rule isn't more complicated
than a simple multiple-dispatch rule, and consequently, wrong (the rule) :-)

Ada folks, the point was not to go head-banging about this.
This was a general ontological crosspost, answering somebody
who is trying to get a good conceptual base about OOP.

PS: Dewar, you said `old peculiar terminology'. I don't
know what's up with USENET, but it often happens that
a subject will appear just a couple of days after I start meditating
on some issue. I couldn't determine whether I had heard
this `subject-oriented' term before. Got any references?
Thanks.

PS2 (yuck :-)) : Oh, yeah, I got email from a couple of
CLOS fans who _love_ the peculiar terminology (but alas,
they also like peculiar languages, it seems.. ;-) )

Ross Driedger

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to

I have hope for this thread, considering the last three posts.

I never thought I would have started such a thread by commenting,
in a very offhand way, about the 'purity' of C++ in regards to OOness.
The important thing that Matt and Bill have pointed out is the design
and view of the implementers that make or break a project's OO nature.

Too often, and I'm sure many of you have seen it, someone with a tenuous
grasp of C++ syntax will think (s)he is doing OO programming just because
it is C++.

David Weller

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
In article <1995Apr11....@eisner.decus.org>,

Larry Kilgallen <kilg...@eisner.decus.org> wrote:
>
>I am not a compiler expert, but the general idea is that the more
>data given to a compiler, the better it is able to implement
>efficiently. A language may provide more or less ability to express
>exact desires to a machine.
>
Sadly, this is exactly the opposite of the "popular" perception.
It seems that most people think that terse words and symbols, coupled
with types that "fit" (int, float, long float), are what one needs to
be "efficient".

Curtis Bass

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
gt2...@prism.gatech.edu (Richard Ladd Kirkham) wrote:
>
> In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
> >
> > C++ . . . is not an OOL in the pure sense)
>
> Why?

Well, C++ does directly support all the mechanisms necessary
for object-orientation (encapsulation, abstraction, polymorphism,
and inheritance), but it still allows you to write procedural
(i.e. structured) code without requiring an object-oriented
design or implementation. In a "pure" OOPL, such as Smalltalk,
EVERYTHING is an object, and the object-oriented paradigm is
the ONLY one supported -- you cannot write strictly procedural
code in a "pure" OOPL.

So, C++ is definitely an OOPL, but IS "impure" in that it is
also structured/procedural.

It's sort of like BASIC -- you CAN write structured code in
most modern BASIC dialects, using procedures and parameter-
passing mechanisms with local variables, but you can also
write old-fashioned, unstructured spaghetti BASIC, using
GOTO's, GOSUBS, line numbers, and all global variables. So,
we can say that BASIC is not a "structured" language in the
"pure" sense.

> --
> Richard Ladd Kirkham
> Georgia Institute of Technology, Atlanta Georgia, 30332
> uucp: ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt2782a
> Internet: gt2...@prism.gatech.edu


Curtis Bass
Software Systems Specialist II
University of Texas Medical Branch


Ross Driedger

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
rma...@rcmcon.com (Robert Martin) wrote:

>>Being an OOPL is irrelevant to speed. Smalltalk is slow (relatively
>>speaking) because it is interpreted, and because messages are looked
>>up in hash tables (or the equivalent).

Does that include when the image has been stripped of the extraneous
classes? My experience is with the incremental compilers of the
PC world. Is even the finished .exe considered an interpreted program?

>>Purity of OO (Whatever that may be) also has nothing to do with speed.

I'd like to believe that (Really, I would). My experience tells me
otherwise. Even the '++' adds some overhead. The question to be asked
is whether that overhead is acceptable in a given situation.

>>Messages, after all, are really just function calls. Polymorphic
>>messages need be nothing more than calls to functions via function
>>pointers. In C++, this is exactly what they are.

Look, I don't want to get into an involved discussion on OO theory.
It is something for those who don't have real lives to begin with. ;-)
I am very pragmatic about the tools I use and in many cases C++ seems
to be a good choice.

>>Whether C++ is pure or not is irrelevant.

I would disagree with you on this in one way: the ability to drop
down to a functional, imperative or low level disqualifies it as a
'pure' OOL but it is what gives it this flexibility, usefulness
and popularity.

Ross Driedger

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
de...@cs.nyu.edu (Robert Dewar) wrote:
>>Ross, you still talk about your Smalltalk *implementation* as though it
>>provided definitive answers on the efficiency of the *language* (please note
>>the title of this thread).

Granted that there are good and not so good implementations of Smalltalk.
Even the best has levels of abstraction that must be fought through to get
from source code to machine code.

All languages do that to one extent or another, and in many circumstances
(language/application) the levels are not obvious to the user because the
overhead is eaten up in otherwise wasted cycles. If the relative size of
that overhead causes perceptible delays in program execution, the problem
goes beyond the implementation and enters the realm of language design.

>>It does not, it merely tells you about how efficiently your vendor has
>>implemented Smalltalk -- most interesting if you have to use that
>>implementation, but not really relevant to this thread of discussion.

Hmmm. Suppose you had a Smalltalk, Eiffel or Sather implementation that
contained a class that would allow you to directly manipulate CPU registers.
Would you use that language to write a device driver? Or an OS kernel?
I don't think you would, because the design of the language would not allow the
kind of performance expected by the users.

Now, if the subject is programmer efficiency, that is a whole different ballgame.

Robert Dewar

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
Robert Martin's posts on S/T vs C++ efficiency are absolutely on target.
It is surprising how much trouble people have in distinguishing between
languages and their implementations.

The confusion leads to two kinds of error:

1. Writing non-portable programs, because the programmer does not
understand the difference between the semantics of the
language and the behavior of the compiler in use.

2. Making claims about what is good/bad in a language based on
implementation:

Ada has storage leaks
Ada takes ages to compile
Ada generates inefficient code
etc.

Crookie

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
rma...@rcmcon.com (Robert Martin) wrote:
>Crookie <g...@mfltd.co.uk> writes:
>
>>er, i thought the inherent difference in S/T (as opposed to Object COBOL
>>or C++) is that it does _everything_ by messages. even 2+2.
>
>The notion that in Smalltalk everything is objects and messages is
>often touted as the justification for calling it a pure OOPL. But the
>concept is nonsense. Methods are just functions. Messages are just
>function calls. Polymorphic message dispatch is just calling
>functions through pointers.
>
>The difference between an OO language and a non OO language is that an
>OO language provides facilities for managing polymorphic dispatch and
>for encapsulating functions and data structures.

i'm not sure i understand your response to my post...looks like you've
got something to say and you're just going to say it...

i'm simply saying that if a language implements 2+2 as a "function call"
with parameters (and is then interpreted) it is inherently slower than
a language that implements it using native (compiled) machine code.

--
Crookie
g...@mfltd.co.uk

Robert Martin

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
wjw...@kocrsv01.delcoelect.com (Bill Winn) writes:

>In article <3mcior$b...@Mars.mcs.com>, fr...@MCS.COM wrote...
><snip>
>>I'm not sure about the original poster's reasoning but 1. You can do
>>object oriented programming in both C and C++,

>Technically speaking, object-oriented programming requires inheritance
>(among other things), which C does not support but C++ does. Therefore
>one cannot do OOP in C.

It is possible to emulate inheritance in C. (After all, many C++
compilers translate down to C.) However, the techniques are difficult and
error prone.

Yes, you *can* do OO in any procedural language that supports pointers
to functions, or the equivalent. But you probably would not want to.

Robert Martin

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:

>ma...@physics2.berkeley.edu (Matt Austern) wrote:
>>>C doesn't have inheritance built in, but it is possible to do it
>>>manually. That's also true of polymorphism and dynamic type
>>>dispatching. I have seen object oriented C code.

>>>I like to make the distinction between object oriented design (a
>>>method of design that focuses on object identity and relationships
>>>rather than on algorithms), object oriented programming (a programming
>>>style, used to implement object oriented designs, that involves
>>>encapsulation, data abstraction, inheritance, and polymorphism), and
>>>object oriented languages (languages which provide specific built-in
>>>support for object oriented programming).

>I have hope for this thread, considering the last three posts.

>I never thought I would have started such a thread by commenting,
>in a very offhand way, about the 'purity' of C++ in regards to OOness.
>The important thing that Matt and Bill have pointed out is the design
>and view of the implementers that make or break a project's OO nature.

All this is true. You *can* perform OOD and then implement in a
procedural language (given that the procedural language has something
equivalent to pointers to functions). But I would not want to do it.

The primary tools of OOD are abstract polymorphic interfaces that
break dependencies between clients and servers. In order to implement
these, we need indirection of function call (pointers to functions).
In C, we can get this by creating structures that contain a bunch of
pointers to the appropriate functions.

The point is that this style of programming is full of ad-hoc
conventions, is very error prone, and very difficult to maintain. An
OOPL like C++ takes away the clerical overhead, and the need for error
prone conventions.
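The convention described above can be sketched in a few lines of C-style code (compilable as C++; every name here is illustrative, not from any real library): a struct of function pointers plays the role of a vtable, and each "class" wires it up by hand.

```cpp
#include <cassert>

struct Animal;

// One "method slot" per operation -- the hand-rolled vtable.
struct AnimalVTable {
    int (*leg_count)(const Animal*);
};

// Every object carries a pointer to its dispatch table.
struct Animal {
    const AnimalVTable* vt;
    int legs;
};

static int dog_leg_count(const Animal* a) { return a->legs; }
static const AnimalVTable dog_vt = { dog_leg_count };

Animal make_dog() {
    Animal a = { &dog_vt, 4 };
    return a;
}

// A "polymorphic message send": an indirect call through the object's table.
int leg_count(const Animal* a) { return a->vt->leg_count(a); }
```

Nothing enforces that `vt` is ever set, or that the table matches the object -- which is exactly the kind of ad-hoc, error-prone convention the post is talking about.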

Tore Joergensen

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
Ross Driedger (cs4g...@maccs.dcss.mcmaster.ca) wrote:
: rma...@rcmcon.com (Robert Martin) wrote:

: >>Being an OOPL is irrelevant to speed. Smalltalk is slow (relatively
: >>speaking) because it is interpreted, and because messages are looked
: >>up in hash tables (or the equivalent).

: Does that include when the image has been stripped of the extraneous
: classes? My experience is with the incremental compilers of the
: PC world. Is even the finished .exe considered an interpreted program?

Sometimes it is :-) It is not impossible to pack the interpreter and
the (more or less translated) code in an exe-file. I used Visual Basic
for a project, and was very disappointed with the speed. I made a simple
test running a loop with some calculations. Making an exe gave no
improvement in speed. I don't know if VB compiles the code, but I doubt
it.
--
______________________________________________________________________
Tore B. Joergensen, | e-mail: to...@lis.pitt.edu
a norwegian student | snail-mail: 2201 Pittockstr.
a long way from home. | Pittsburgh, 15217 PA
| web: http://www.pitt.edu/~tojst1

Guy Marentette

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
In a previous posting, Tore Joergensen (to...@lis.pitt.edu) writes:
> Ross Driedger (cs4g...@maccs.dcss.mcmaster.ca) wrote:
> : rma...@rcmcon.com (Robert Martin) wrote:
>
> : >>Being an OOPL is irrelevant to speed. Smalltalk is slow (relatively
> : >>speaking) because it is interpreted, and because messages are looked
> : >>up in hash tables (or the equivalent).
>
> : Does that include when the image has been stripped of the extraneous
> : classes? My experience is with the incremental compilers of the
> : PC world. Is even the finished .exe considered an interpreted program?
>

Digitalk's and ParcPlace's (and possibly other vendors') latest versions
of Smalltalk no longer interpret the method code. The first time a method
is invoked, its byte code is compiled to machine code for execution. The
machine code is cached and used in subsequent invocations.


--
---------------------------------------------
Guy Marentette
aa...@Freenet.carleton.ca
Ottawa, Canada

Robert Dewar

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
Fine, Ross, if you don't want to get into a theoretical discussion of
language efficiency, that's perfectly reasonable, but then why post under
this thread, whose original purpose, now unfortunately somewhat blurred
by confused posts, is precisely to discuss the theoretical issue of
language efficiency, as opposed to implementation efficiency? In other
words, we are interested not in what your compiler or anyone else's
compiler currently may or may not do, but rather in whether languages
have fundamental differences that limit what can be achieved in
*any* implementation. The (rather bogus) comments about OOL purity are
also irrelevant to this discussion.


Robert Dewar

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
"I don't think you would because the design of the language would not allow the
kind of performance expected by the users."

You keep making unsubstantiated claims like this, and when challenged to
back up these claims, you simply quote observations about the behavior of
your particular compiler.

What is an example of an aspect of "the design of the language" that
fundamentally restricts performance? Please don't tell me any more about
what your compiler can or cannot do. Instead, quote a particular feature
and explain at a theoretical level why you think that it restricts the
maximum achievable performance.


Keith Thompson

unread,
Apr 12, 1995, 3:00:00 AM4/12/95
to
Richard Ladd Kirkham (gt2...@prism.gatech.edu) wrote:
: In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
: >
: > C++ . . . is not an OOL in the pure sense)

: Why?

I suspect what the original poster meant is that C++ is not a *pure*
OOL because not all types are classes (in the sense that C++ uses the
term "class"). For example, you can't declare a type^H^H^H^Hclass
inherited from int or float, whereas you can inherit from anything
explicitly declared as a class. This is a result of C++'s C heritage.
Ada 95 is slightly "purer" in this sense, in that you can derive from
the predefined types.

In a pure OOL, even the predefined types would be classes derived from
some (probably amorphous) base class with a name like Any. I think
Smalltalk and Eiffel are like this.

Or I'm completely wrong.

Whether this kind of purity is good, bad, or even relevant is another
issue, preferably for another newsgroup.

--
Keith Thompson (The_Other_Keith) k...@thomsoft.com (k...@alsys.com still works)
TeleSoft^H^H^H^H^H^H^H^H Alsys^H^H^H^H^H Thomson Software Products
10251 Vista Sorrento Parkway, Suite 300, San Diego, CA, USA, 92121-2718
That's Keith Thompson *with* a 'p', Thomson Software Products *without* a 'p'.

Stephane Barbey

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <3mg45s$5...@disunms.epfl.ch>, mato...@di.epfl.ch (Fernando Mato Mira) writes:
:
: PS: Dewar, you said `old peculiar terminology'. I don't

: know what's up with USENET, but it often happens that
: a subject will appear just a couple of days I start meditating
: about some issue. I couldn't determine if I had heard
: this `subject-oriented' term before. Got any references?
: Thanks.
:

I'm not Dewar, but you could take a look at

William Harrison and Harold Ossher, "Subject-oriented Programming (A
critique of pure objects)", in the proceedings of OOPSLA '94, p.
411-428.


-Stephane


--
Stephane Barbey "It's not easy getting real wings"
INJ-335, bar...@di.epfl.ch

Scot Dyer

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
mato...@di.epfl.ch (Fernando Mato Mira) writes:

>In article <dewar.797608300@gnat>, de...@cs.nyu.edu (Robert Dewar) writes:
>> "Good point. Purity is irrelevant. However, I would be tempted
>> to call all single-dispatch languages `subject-oriented', and
>> only those with multi-methods `object-oriented'."
>>
>> You can be tempted (and indeed go ahead and use) any old peculiar
>> terminology you want, but arguing about terminology is totally
>> irrelevant.
>>

>> [...]

>I didn't mean to imply that single-dispatch languages are not good.
>The terminology might be `peculiar' but would probably be more exact
>(too late to change, I guess).

You know, I've seen quite a bit of this argument about single vs. multiple
dispatch, but I've very rarely seen an example of what multiple dispatch can
do right that single dispatch can't do right. I would like to offer such
an example.

Suppose I have a class hierarchy rooted at the class 'Shape'. Shapes will
include rectangles, ovals, blocks of text, basically anything that has a
graphical representation. Suppose I also have a class hierarchy rooted
at the class 'Device' which contains various graphical output devices.

Question: Using single dispatch, how does one code a subroutine to display
a given shape on a given device?

Answer: Well, you have to pick either the shape object or the device object
to control dispatch. The methods, then, will contain big cases around the
type of the other object supplied. For example, each device could act
differently according to the shape given it (via some ugly case or switch type
construction), or each shape could act differently according to the device
given it (via another ugly case or switch type construction).

Problem: Say I want to add a new device and a new shape. What code do I
have to touch? If I let the device do the dispatch, then I have to touch
every device for each new shape. If I let the shape do the dispatch, then
I have to touch every shape for every new device. Ugh! Isn't this what OO
was supposed to solve?

Solution: In a language with multiple dispatch, one may write a function
display( shape s, device d ) { ... } which does the trick. Then when you
add a shape you need to add display() methods for it and every device, and
when you add a device you need to add display() methods for every shape.
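For contrast, here is a minimal sketch of what the single-dispatch workaround looks like in C++ (all class and function names illustrative): the language dispatches on one receiver only, so dispatching on both the shape and the device is simulated with a virtual call on the shape plus one overload per concrete device, so-called double dispatch.

```cpp
#include <cassert>
#include <string>

struct Screen;
struct Printer;

struct Shape {
    virtual ~Shape() = default;
    // Adding a new device means adding an overload here and touching
    // every Shape subclass -- the maintenance problem described above.
    virtual std::string draw_on(const Screen&) const = 0;
    virtual std::string draw_on(const Printer&) const = 0;
};

// First "dispatch" is the overload chosen by the device's static type;
// the virtual call on the shape supplies the second.
struct Screen  { std::string show(const Shape& s) const { return s.draw_on(*this); } };
struct Printer { std::string show(const Shape& s) const { return s.draw_on(*this); } };

struct Oval : Shape {
    std::string draw_on(const Screen&)  const override { return "oval on screen"; }
    std::string draw_on(const Printer&) const override { return "oval on printer"; }
};
```

In a multi-method language the two `draw_on` overloads would instead be free-standing `display(shape, device)` methods, and neither hierarchy would need to know about the other's concrete classes.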

--
swd@{cse|ramoth}.unl.edu
What the large print giveth, the small print taketh away: Any resemblence
between the above views and those of my employer, terminal, institution or
self is purely coincidental. If unsatisfied, return the unused portion.


Fernando Mato Mira

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <3mjc8c$6...@crcnis3.unl.edu>, s...@cse.unl.edu (Scot Dyer) writes:

> You know, I've seen quite a bit of this argument about single vs. multiple
> dispatch, but I've vary rarely seen an example of what multiple dispatch can
> do right that single dispatch can't do right. I would like to offer such
> an example.
>
> Suppose I have a class hierarchy rooted at the class 'Shape'. Shapes will
> include rectangles, ovals, blocks of text, basically anything that has a
> graphical representation. Suppose I also have a class hierarchy rooted
> at the class 'Device' which contains various graphical output devices.

Yes. Some people here wrote a 3D graphics toolkit in Eiffel
(and with a very good design), where you get a bit lost with all
the cascaded dispatch among renderers, shapes, views, etc.

When I set out to make my own system in CLOS, the ability to do
multiple dispatch not only gave me code that is much simpler to follow
and extend, it even made whole subhierarchies disappear completely!

But I still miss `Predicate Classes' in CLOS
(as in Cecil; surf through my `tech page' and `fave report servers' for the .ps).
They would allow object-oriented techniques to be used more thoroughly
(defining 2**N classes to represent the different attribute
combinations in a `view' (texture, antialiasing, transparency, etc.) would
be crazy).

Fernando Mato Mira

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <1995Apr1...@di.epfl.ch>, Stephan...@di.epfl.ch (Stephane Barbey) writes:
> In article <3mg45s$5...@disunms.epfl.ch>, mato...@di.epfl.ch (Fernando Mato Mira) writes:
> :
> : PS: Dewar, you said `old peculiar terminology'. I don't
> : know what's up with USENET, but it often happens that
> : a subject will appear just a couple of days after I start meditating
> : about some issue. I couldn't determine if I had heard
> : this `subject-oriented' term before. Got any references?
> : Thanks.
> :
>
> I'm not Dewar, but you could take a look at
>
> William Harrison and Harold Ossher, "Subject-oriented Programming (A
> critique of pure objects)", in the proceedings of OOPSLA '94, p.
> 411-428.

Yes. I remember giving a brief look at this paper but dismissing
their usage of the term `subject-oriented', because a user is just another
object, a special kind of context, and you can `avoid concentrating
on the object' as they advocate just by staying object-oriented (albeit
with multiple dispatch, that is!).

Ken Leidner

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <dewar.797729041@gnat>, de...@cs.nyu.edu says...
>

>You keep making unsubstantiated claims like this, and when challenged to
>back up these claims, you simply quote observations about the behavior of
>your particular compiler.
>
>What is an example of an aspect of "the design of the language" that
>fundamentally restricts performance? Please don't tell me any more about
>what your compiler can or cannot do. Instead quote a particular feature
>and explain at a theoretical level why you think that it restricts the
>maximum achievable performance.
>

An example would have to be based on the "language," not an add-on.

So how about the fact that FORTRAN allows more array dimensions than COBOL?
Writing the same program in each language would, by the nature of the
language, force the COBOL programmer to change how data defined in, say,
17 dimensions is represented. (No, I don't know of any such problem.)
This would force a difference, no matter how the compilers deal with the
program, that restricts the performance of the COBOL version.

Or IBM's PL/S, which supports only integer math in the language, has the
same problem when asked to take the square root of an arbitrary number.
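To make the point concrete: in a language offering only integer arithmetic, square roots must be hand-coded. A sketch in C++ (function name and approach are mine, purely illustrative) of the kind of integer Newton iteration such a programmer would write, yielding floor(sqrt(n)):

```cpp
#include <cassert>

// What a programmer in an integer-only language (the PL/S situation
// described above) must hand-code to get square roots: Newton's method
// on integers, converging to the floor of the true square root.
unsigned isqrt(unsigned n) {
    if (n < 2) return n;
    unsigned x = n;
    unsigned y = (x + n / x) / 2;
    while (y < x) {            // iterate until the estimate stops shrinking
        x = y;
        y = (x + n / x) / 2;
    }
    return x;
}
```

A language with floating point gets this from one library call (often one hardware instruction); here it costs a loop of integer divisions.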

The place to look for language efficiency is at missing data types or
processing statements that a given problem could use to express itself.
When they are missing, that language cannot be as efficient, assuming I
get the "best" compiler for each language. Now the hard part is knowing
which language feature best expresses the problem.

We all know that under the covers the same machine language is at work, so
the question is which language, for a given problem, can best express the
problem. For more problems than we would like to think (we all
do have our favorite language), many languages can express the problem
equally well.

So to me the question is: what language features does this problem need
in a programming language? And I hope that the compiler does a good job and
that I know the language!

As I said before, we have different languages for a reason. When that
reason is known, we have an idea of what kinds of problems the language
might be most efficient at. Well, sometimes.


Ross Driedger

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
de...@cs.nyu.edu (Robert Dewar) wrote:
>>"I don't think you would because the design of the language would not allow the
>>kind of performance expected by the users."
>>
>>You keep making unsubstantiated claims like this, and when challenged to
>>back up these claims, you simply quote observations about the behavior of
>>your particular compiler.

The least you could do is attribute the quote to me.

After your last post I was pretty sure that all your contributions were nothing
but an extended troll.

Now I'm sure. Too bad. The other contributors were saying worthwhile things.

So, when you write your efficient OS in Eiffel (or Smalltalk, or whatever
high-overhead language you like), drop me an email.

In the meantime, ciao....

Robert Martin

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
Curtis Bass <cbass%intmed...@mhost.utmb.edu> writes:

>gt2...@prism.gatech.edu (Richard Ladd Kirkham) wrote:
>>
>> In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
>> >
>> > C++ . . . is not an OOL in the pure sense)
>>
>> Why?

>Well, C++ does directly support all the mechanisms necessary
>for object-orientation (encapsulation, abstraction, polymorphism,
>and inheritance), but it still allows you to write procedural
>(i.e. structured) code without requiring an object-oriented
>design or implementation. In a "pure" OOPL, such as Smalltalk,
>EVERYTHING is an object, and the object-oriented paradigm is
>the ONLY one supported -- you cannot write strictly procedural
>code in a 'pure" OOPL.

Would that 'twere so. Actually, all OOPLs are conducive to the
implementation of structured programs. This must be so. Consider
that a structured program is an amalgam of data structures and
functions. In an OOPL we could create a single object which contains
all these data structures and functions. Voila, a structured program
implemented in an OOPL.

You might think that nobody would ever *do* such a thing, but in fact
it is fairly common practice for people to convert procedural legacy
code into OOPLs by doing exactly this. First they get the design into
an OOPL, and then they migrate it into an OOD.
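A C++ illustration of this point (class and member names are invented for the example): a procedural legacy design dropped, unchanged, into a single class. It is legal in any OOPL, yet it is still a structured program, one blob of data plus the functions over it.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical "God object": the old global data structures and the old
// procedures over them, wrapped in one class.  It compiles in an OOPL,
// but there is nothing object-oriented about the design.
class LegacyPayroll {
    std::vector<int> hours;   // the old global tables...
    std::vector<int> rates;
public:
    void add_employee(int h, int r) {
        hours.push_back(h);
        rates.push_back(r);
    }
    int total_pay() const {   // ...and the old procedure over them
        int total = 0;
        for (std::size_t i = 0; i < hours.size(); ++i)
            total += hours[i] * rates[i];
        return total;
    }
};
```

Migrating this to an OOD would mean discovering the real objects (Employee, PayPeriod, ...) hiding inside the blob.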

So, what then is a "pure" OOPL? My contention is that the term has no
useful definition.

>It's sort of like BASIC -- you CAN write structured code in
>most modern BASIC dialects, using procedures and parameter-
>passing mechanisms with local variables, but you can also
>write old-fashioned, unstructured spaghetti BASIC, using
>GOTO's, GOSUBS, line numbers, and all global variables. So,
>we can say that BASIC is not a "structured" language in the
>"pure" sense.

Yes, just like C is not an OO language in the "pure" sense. But that
was not the argument you were making. You were making the opposite
argument. By analogy you were saying that a "pure" structured
language would not allow the implementation of unstructured programs.

-------

I invest a lot of time in these arguments. And that is strange, because I
consider them to be almost frivolous. However, the reason I
spend the time on them is that they are also "seductive". Not that
people who use the term "pure" are being dishonest, or sneaky, but
that the term "pure" is very emotive. Purity is most often equated
with "goodness" or "simplicity". And these implications are not
appropriate for describing the OO-ness of a language.

I remember talking to a young man in High School a few years ago. I
told him that I did a lot of work in C++. He told me that he was not
interested in learning C++ because it was not a "pure" object oriented
language.

Because of that event, whenever anybody talks about the purity of an
OOPL, this young man's face is conjured in my mind, and I feel the
need to prevent someone from choosing or rejecting a language based
upon such a misleading attribute.

Henry Baker

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <dewar.797469506@gnat>, de...@cs.nyu.edu (Robert Dewar) wrote:

> Kennel says:
>
> "Different assumptions built into one language vs. another make some
> implementation strategies that are simple and easy in one language either
> uneconomically difficult {usually} or outright impossible."
>
> This isn't substantiation, it is just vague expression of opinion with
> no substance. Can you give SPECIFIC examples of what you mean. While I
> can think of marginal instances of the above principle, I think as a
> general statement of the situation it is completely WRONG!
>
> I obviously can't give examples of the absence of a phenomenon, so really
> you need to give some examples of its presence!

I've worried about this problem for a long time.

Yes, it is real. For example, (old) Fortran assumed static allocation
of variables, while nearly everyone else had at least stack allocation
of variables. Knowing that an address is constant at run time leads to
some (usually minor) optimizations that would otherwise be difficult to
discover. Related examples are arrays whose size is fixed at compile
time, and whose bounds checking is therefore somewhat simplified. It is
true that nothing keeps more sophisticated languages from declaring
fixed-size arrays, but some sophisticated languages do not provide such
a declaration, and cannot even in principle perform such an optimization.

Another problem is the profitability of certain optimizations for certain
languages & applications. There is no reason why many of the sophisticated
array-hacking optimizations found in modern Fortran compilers can't be
done for other languages like Lisp or Ada. Unfortunately, these optimizations
are quite expensive, but worse still, they are quite application-specific.
Therefore, unless the users of your compilers are compiling this type of
application, your expensive optimization will go to waste.

Therefore, natural selection/evolution has caused array-hacking/numeric
applications to go to Fortran, and Fortran compiler vendors have responded
with optimizations suitable for them. Vendors of -- e.g., Ada -- compilers
could do most of the same optimizations, but even if they did, it is highly
unlikely that that user community would convert to Ada, so those optimizations
would be wasted.

Therefore, although languages may _start_ nearly equivalent in power &
optimizability, their evolution along with their user communities quickly
tune the languages in such a way that they are no longer equivalent.

--
www/ftp directory:
ftp://ftp.netcom.com/pub/hb/hbaker/home.html

Kenneth Almquist

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
Robb....@di.epfl.ch (Robb Nebbe) writes:
> In the context of separate compilation the lack of type information in
> Smalltalk creates overhead that you don't have enough information to
> get rid of. Obviously if you have the entire program then, at least
> theoretically, you could produce the same code since you could propagate
> type information (after having inferred it), but I would expect that to
> be far from trivial.

The Icon compiler from the University of Arizona does global type
inference as part of translating Icon to C. The type inference is
certainly not "trivial," but the compiler as a whole is relatively
simple.

(For those not familiar with Icon, it is the successor to Snobol.
Variable declarations do not specify the types of the variables.)
Kenneth Almquist

Ell

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In <1995Apr13....@rcmcon.com> rma...@rcmcon.com writes:

> Curtis Bass <cbass%intmed...@mhost.utmb.edu> writes:
>
> >gt2...@prism.gatech.edu (Richard Ladd Kirkham) wrote:
> >>
> >> In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
> >> >
> >> > C++ . . . is not an OOL in the pure sense)
> >>
> >> Why?
>
> >Well, C++ does directly support all the mechanisms necessary
> >for object-orientation (encapsulation, abstraction, polymorphism,
> >and inheritance), but it still allows you to write procedural
> >(i.e. structured) code without requiring an object-oriented
> >design or implementation. In a "pure" OOPL, such as Smalltalk,
> >EVERYTHING is an object, and the object-oriented paradigm is
> >the ONLY one supported -- you cannot write strictly procedural
> >code in a 'pure" OOPL.
>
> Would that 'twere so. Actually, all OOPLs are conducive to the
> implementation of structured programs. This must be so. Consider
> that a structure program is an amalgam of data structures and
> functions. In an OOPL we could create a single object which contains
> all these data structures and functions. Voila, a structured program
> implemented in a OOPL.

I understand "structured programming" to mean well abstracted, or similarly,
well modularized, as Dijkstra says (with little use of 'goto'). So putting
everything in one object would be contrary to structured programming.

Elliott


Robert Dewar

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
Fortran does not, and never has, required static allocation for subroutine
variables. This is a common myth, but in the original Fortran standard
(we are talking about languages here, so I assume we are talking standards!),
Fortran-66, the semantics were very carefully set up to allow either static
allocation or stack allocation.

For example, if you initialize a local variable in Fortran using DATA, then
modify it, the semantics are that next time in, the value is undefined (yes,
surprisingly, the common idiom of maintaining a "first time in" variable
that is initialized by DATA is completely wrong!)

It is possible that this has been modified in more recent Fortran standards;
my 100% language-lawyer understanding of Fortran only made it partway into
the '70s.

Interestingly, on most modern machines, global addressing is much more
expensive than stack addressing, because on RISC machines it takes an
extra instruction to build a direct address. Furthermore, such direct
addressing is not permitted in many environments, because of shared data
segment considerations, and you are forced to go indirect through a pointer
(e.g. the TOC in an IBM AIX environment)

Finally, I see nothing that stops you using static variables for Ada for
non-recursive procedures if that speeds things up (as above, it may not!)

The point about optimizations not being economical for given implementation
environments is perfectly valid, but has nothing to do with language
efficiency, only with the efficiency of implementations. Let's not blur
this distinction.

It is one thing for someone to say "Hey, my Ada program runs slower than
Fortran". It is quite another for someone to say "That's because Ada is
fundamentally less efficient than Fortran at the language level", when
the proper answer is "your implementation is deficient, but you cannot
blame this on the language".

P.S. the point about stacks and Fortran variables is not just theoretical,
the B-5500 Fortran compiler used stack allocation for its variables.


Robert Dewar

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
First: Fortran does not permit more dimensions than COBOL (I guess you can
find an early version of COBOL that does not allow as many dimensions as
the latest Fortran, but that's an invalid comparison, after all Fortran-2
only allowed 3 dimensions). You should compare current versions, and your
statement is false for the current versions.

Second: even if true, so what? If you need a 10-dimensional array and
you are coding in a language with only 3 dimensions, you have to do your
own linearization. But a typical compiler linearizes anyway and then
optimizes the linearization, so you will get identical machine code; even
this difference would not be reflected in language efficiency, only
in convenience.
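The linearization argument can be sketched in code (names are hypothetical). This is the row-major index arithmetic a programmer hand-codes when the language caps the number of dimensions, and essentially the multiply-add chain a compiler emits for a native N-dimensional array anyway:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Row-major linearization of a multi-dimensional index.  A 17-dimensional
// problem just means a 17-entry dims vector; the generated machine code
// is the same multiply-add chain whether the programmer or the compiler
// writes it.
std::size_t flatten(const std::vector<std::size_t>& idx,
                    const std::vector<std::size_t>& dims) {
    std::size_t flat = 0;
    for (std::size_t k = 0; k < dims.size(); ++k)
        flat = flat * dims[k] + idx[k];   // Horner-style accumulation
    return flat;
}
```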

So that example bites the dust.

IBM PL/S is far too obscure for me to know anything about, but I would be
surprised if it does not support floating point. Is this really true?
As far as I know, PL/S is still proprietary.

Anyway, how about confining examples to well-known languages so that
enough people can judge?

Still waiting for a convincing example of an instance in which differences
in language semantics or syntax cause actual real-world programs to have
fundamentally different levels of performance because of differences in
the language and not just the implementation ....


Norman H. Cohen

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <3mgnkc$e...@atlantis.utmb.edu>, Curtis Bass
<cbass%intmed...@mhost.utmb.edu> writes:

|> In a "pure" OOPL, such as Smalltalk,
|> EVERYTHING is an object, and the object-oriented paradigm is
|> the ONLY one supported -- you cannot write strictly procedural
|> code in a 'pure" OOPL.

Nonsense. It is easy to do a straightforward transliteration of a
Fortran IV program into Smalltalk, for example.

--
Norman H. Cohen nco...@watson.ibm.com

Matt Austern

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
In article <1995Apr13....@rcmcon.com> rma...@rcmcon.com (Robert Martin) writes:

> So, what then is a "pure" OOPL. My contention is that the term has no
> useful definition.

First, I agree that most people don't have a very clear definition in
mind when they talk about "pure" object-oriented languages. ("Pure"
and "powerful" have to be the two least informative words in all of
computer science!) I think there are useful distinctions to be made
between different object-oriented languages, though, and you could
even use the word "pure" for one category if you wanted to.

One useful distinction is between languages where everything is an
object and languages where that's not the case. Are integers and
characters objects, for example? Are classes objects? Are operating
system resources objects? Are processes and threads objects?

This is a continuum, of course, not a binary distinction. (What's
"everything"?) Still, I think it's one useful way to think about
languages.
--

--matt

Robert Dewar

unread,
Apr 13, 1995, 3:00:00 AM4/13/95
to
OK, I guess Ross Driedger declines to give a concrete example, or cannot. So
we still do not have a clear technical example of a feature in a language
that fundamentally causes inefficient code to be generated. Still looking?


Ell

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to
In <MATT.95Ap...@physics2.berkeley.edu> ma...@physics2.berkeley.edu writes:

Yes I think it is also. Even strongly typed languages can have everything
as an object.

Elliott


Patrick C. Beard

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to
In article <3mjc8c$6...@crcnis3.unl.edu> Scot Dyer, s...@cse.unl.edu writes:
>Question: Using single dispatch, how does one code a subroutine to display
>a given shape on a given device?
>
>Answer: Well, you have to pick either the shape object or the device object
>to control dispatch. The methods, then, will contain big cases around the
>type of the other object supplied. For example, each device could act
>differently according to the shape given it (via some ugly case or switch type
>construction), or each shape could act differently according to the device
>given it (via another ugly case or switch type construction).

I would argue that this switching on the "type" of the Shape object is not
necessary. I have built a graphical system using single dispatch in C++,
and it isn't as bad as you might think.

>Problem: Say I want to add a new device and a new shape. What code do I
>have to touch? If I let the device do the dispatch, then I have to touch
>every device for each new shape. If I let the shape do the dispatch, then
>I have to touch every shape for every new device. Ug! Isn't this what OO
>was supposed to solve?

The solution is to provide a sufficiently rich set of graphics primitives
that all devices can support in an abstract way. The problem I was trying
to solve was to make on-screen drawing and printing work in the same way.

If Shape objects are required to display themselves on a Device, as in:

class Device {
public:
    virtual void Line(Point& p1, Point& p2);
    // basic graphic primitives, etc...
};

class Shape {
public:
    virtual void Display(Device& onDevice);
};

then any given shape can render itself using the set of primitive operations
provided by the Device, and new Shapes can be added without modifying any
devices. If a new Device operation is needed, then the Device(s) will have
to be modified, but with careful use of inheritance it is often possible to
synthesize the new operations from existing ones and add them to the
abstract Device class.

>Solution: In a language with multiple dispatch, one may write a function
>display( shape s, device d ) { ... } which does the trick. Then when you
>add a shape you need to add display() methods for it and every device, and
>when you add a device you need to add display() methods for every shape.

Still, if there are n devices, then whenever I add a new Shape, I'd have to
write n display methods. I'd rather only have to write 1 new method that
works on all devices. This is how my system works.

Here's a toy example. Assume we have a device that only provides line
primitives. Lines can be used for lots of things.

class Line {
public:
    virtual void Display(Device& onDevice) { onDevice.Line(itsP1, itsP2); }
};

class Rectangle {
public:
    virtual void Display(Device& onDevice) {
        onDevice.Line(itsTopLeft, itsTopRight);
        onDevice.Line(itsTopRight, itsBottomRight);
        onDevice.Line(itsBottomRight, itsBottomLeft);
        onDevice.Line(itsBottomLeft, itsTopLeft);
    }
};

If Devices are revised to provide a better Rectangle primitive, I can
start using it, but I don't have to.

// author: Patrick C. Beard
// organization: Computing Research Group
// e-mail: be...@cs.ucdavis.edu

Kenneth Almquist

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to
s...@cse.unl.edu (Scot Dyer) writes:
> You know, I've seen quite a bit of this argument about single vs. multiple
> dispatch, but I've vary rarely seen an example of what multiple dispatch can
> do right that single dispatch can't do right.

A real example to discuss. Excellent!

> Question: Using single dispatch, how does one code a subroutine to display
> a given shape on a given device?
>

> [Discussion deleted]
>
> Solution: In a language with multiple dispatch, one may write a function
> display( shape s, device d ) { ... } which does the trick. Then when you
> add a shape you need to add display() methods for it and every device, and
> when you add a device you need to add display() methods for every shape.

Let's suppose that the only requests for added features for this software
will be requests for new shapes or new devices. Then we can define:

s = the number of shapes supported
d = the number of devices supported
f = s + d = the total number of features supported

If we expect feature requests asking for additional shapes and feature
requests asking for additional devices to both arrive at a reasonable
rate, it seems reasonable to assume that s = O(d) and, equally, that
d = O(s). Since the number of display methods required is d * s, the
cost of implementing f features is O(f^2). Over time it will become
increasingly difficult to add new features, until eventually the cost
of supporting additional features will become so huge that adding any
new features will become effectively impossible.


Let me map out an alternative design. To begin with, I assume that
each device has a set of drawing operations that it accepts. Presumably
all devices permit drawing individual pixels, but higher level
operations are likely to be much faster and therefore should be used
if available. So this design has three sets which are expected to grow
over time: the set of shapes, the set of devices, and the set of
drawing operations.

Associated with each device is a list of supported drawing operations
and the implementations of the supported operations. Associated with
each shape is a list of display implementations. Associated with each
display implementation is a list of drawing operations used by the
implementation. To display a particular shape on a particular device,
you:
1) Get the list of operations supported by the device.
2) Select the first display implementation for the shape that
uses only those operations, and
3) Call the display implementation.

Display implementations are ordered by the programmer so that if more
than one implementation could be selected by step 2, the one using
the fewest drawing operations is selected.
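A C++ sketch of the selection rule in steps 1-3 above (the structure and all names are my own invention, not a definitive design): each display implementation lists the drawing operations it needs, and we pick the first one the device can satisfy.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// One candidate display implementation for a shape.
struct Impl {
    std::string name;
    std::set<std::string> needs;   // drawing ops this implementation uses
};

// Implementations are ordered by the programmer, preferred-first, which
// gives the "fewest drawing operations" tie-break described above.
// Returns the first implementation whose needs the device supports.
std::string select(const std::vector<Impl>& impls,
                   const std::set<std::string>& supported) {
    for (const Impl& impl : impls) {
        bool ok = true;
        for (const std::string& op : impl.needs)
            if (!supported.count(op)) { ok = false; break; }
        if (ok) return impl.name;
    }
    return "none";   // fall back to pixel-level drawing, say
}
```

Adding a device then costs only a list of supported operations; existing display implementations are reused automatically.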

With this design you could still see O(f^2) display implementations,
but this is unlikely, for several reasons. First, the larger the set
of drawing operations gets, the less likely it is that a new device will
implement drawing operations which are not already in the set. Therefore,
the number of drawing operations is not likely to be proportional
to the number of devices. Second, once a fair number of operations are
supported, it is likely that any new operations will be specialized
operations which are only useful with a handful of shapes. Thus the
cost of supporting a new operation may not be proportional to the
number of shapes. Finally, it is important to note that this design
allows you to make trade-offs between development effort and performance.
For example, if you add a device which supports drawing parabola segments,
it is not necessary to write a circle display routine which uses this
drawing operation before you can start using the device. Instead, you
can just allow some existing circle drawing routine (which uses line
segments, for example) to do the job. So if the cost of adding new
features starts to increase, you may be able to take advantage of
increasing hardware speeds to convince your customers to accept less
efficient implementations.

This design still uses multiple dispatch for the display routine since
which display routine is called depends upon both the shape and the
device. (Inside the display routine, single dispatch is used to call
the drawing operations.) The difficulty is that the algorithm used to
select the display routine is application specific. It is also subject
to change; it might prove necessary at some point to specify costs for
each drawing operation and select the minimal cost display operation.
As far as I have been able to figure out, this is true in most cases
where multiple dispatch would be useful.

Does CLOS provide any way of doing this sort of thing? I suppose that
in LISP it wouldn't be too unreasonable to write code to generate the
dispatch table at compile time.
Kenneth Almquist

m...@tivoli.mv.com

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to

This has at times been a pretty interesting thread, but we need to
give Robert Dewar some straw men from time to time to keep it alive.
Here's a couple to start.

PERFORM THRU. A construct that in general permits starting an
execution sequence at any paragraph and ending it at any other,
passing through other PERFORM exits which may or may not be set,
makes computation of a control-flow graph a formidable task. And if
that can be done, the resulting graph may well contain a very large
number of paths which, in reality, can never be executed. Difficulty
in modeling the behavior of the object program must certainly impede
optimization.

Recursion. The absence of recursion in the current COBOL standard
means that local static variables may safely be kept in registers,
and common subexpressions involving local statics can be computed
across both internal and external calls. When recursion is permitted,
it is necessary to construct a call graph over the entire linked
executable to determine when these practices are safe.

I'll submit others as they occur to me.

-- Mike McKernan

Ell

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to
In <3mjvq0$r...@nova.umuc.edu> COA...@EUROPA.UMUC.EDU writes:

> In <1995Apr13....@rcmcon.com> rma...@rcmcon.com writes:
> > Curtis Bass <cbass%intmed...@mhost.utmb.edu> writes:
> > >gt2...@prism.gatech.edu (Richard Ladd Kirkham) wrote:
> > >> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
> > >> > C++ . . . is not an OOL in the pure sense)

> > >> Why?

> > >Well, C++ does directly support all the mechanisms necessary
> > >for object-orientation (encapsulation, abstraction, polymorphism,
> > >and inheritance), but it still allows you to write procedural
> > >(i.e. structured) code without requiring an object-oriented
> > >design or implementation. In a "pure" OOPL, such as Smalltalk,
> > >EVERYTHING is an object, and the object-oriented paradigm is
> > >the ONLY one supported -- you cannot write strictly procedural
> > >code in a 'pure" OOPL.

> > Would that 'twere so. Actually, all OOPLs are conducive to the
> > implementation of structured programs. This must be so. Consider
> > that a structured program is an amalgam of data structures and
> > functions. In an OOPL we could create a single object which contains
> > all these data structures and functions. Voila, a structured program
> > implemented in an OOPL.

> I understand "structured programming" to mean well abstracted, or similarly,
> well modularized, as Dijkstra says (with little use of 'goto'). So putting
> everything in one object would be contrary to structured programming.

Though I do get your point that one can write non-OO programs in an OOPL,
even in an OOPL where everything is an object.

Elliott


John Max Skaller

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to
In article <dewar.797791901@gnat>, Robert Dewar <de...@cs.nyu.edu> wrote:
>OK, I guess Ross Driedger declines to give a concrete example, or cannot. So
>we still do not have a clear technical example of a feature in a language
>that fundamentally causes inefficient code to be generated. Still looking?
>

I have no idea what the original post was about.
But here is a "fundamental feature" of ISO C that causes grossly
inefficient code to be written by programmers seeking
portability:

There is no assurance in ISO C that there exist two (unsigned)
integral types of different sizes, let alone one which is
required to support a range at least the square of the other's.
Indeed, there is not even a guarantee that the number of bits in a type
will be even.

As a result it is almost impossible to write
efficient multiple precision arithmetic routines in ISO C.
Just try a simple unsigned add with carry.

Of course most hardware supports such operations
directly with specific flags and instructions. Only an
extremely smart compiler could unravel my portable
multiple-precision arithmetic routines (published
in "Software Solutions in C") and generate optimal
machine code.

If I could rely on a long being just one more bit
than a short, much faster multi-precision add routines
could be coded.

Here is the macro I use, with the line-continuation backslashes ("sloshes") omitted:

#define add_carry(result,x,y,carry)
{
    register word temp = (y) + carry;
    result = (x) + temp;
    carry = (x) > result || (y) && !temp;
}

This is not quite as good as:

void add_carry(unsigned short *result, unsigned short x,
               unsigned short y, unsigned short *carry)
{
    unsigned long temp = (unsigned long)x + y + *carry;
    *result = (unsigned short)temp;           /* low-order bits */
    *carry = temp > (unsigned short)(-1);     /* did the sum overflow a short? */
}

which requires at least 1 more bit in a long than a short.
And that is nowhere near as efficient as:

ADC AX,BX

which is one 8086 machine instruction, although it is marginally
possible a smart compiler might detect that "carry" only
ever held a carry bit from an addition operation.

In fact, ISO C rules which "allow implementations to be optimal"
sometimes seriously degrade the performance of meticulously
portable code.

--
JOHN (MAX) SKALLER, INTERNET:max...@suphys.physics.su.oz.au
Maxtal Pty Ltd,
81A Glebe Point Rd, GLEBE Mem: SA IT/9/22,SC22/WG21
NSW 2037, AUSTRALIA Phone: 61-2-566-2189

Robert I. Eachus

unread,
Apr 14, 1995, 3:00:00 AM4/14/95
to
In article <dewar.797791901@gnat> de...@cs.nyu.edu (Robert Dewar) writes:

> OK, I guess Ross Driedger declines to give a concrete example, or
> cannot. So we still do not have a clear technical example of a
> feature in a language that fundamentally causes inefficient code to
> be generated. Still looking?

First I'll do a try at this... In APL, integer arithmetic does not
overflow, it just returns a floating point result. This means that
even where APL compilers have been written, it is very difficult (but
not impossible in all cases) to make assumptions about the tag of the
result.

Good APL implementations will come out at about 3x the execution
time of other procedural languages on straight-line integer code. (Of
course matrix code can be very competitive--APL provides very good
information to the optimizer.)

Now I'll go the exact opposite direction. I'll leave the names out
because there is a slight embarrassment factor, but not a big one.

Almost ten years ago a key compiler developer at a mini-
supercomputer company called me to get clarifications about the need
for temporaries in vector operations. The vendor was trying to
validate their Ada compiler, and while adding the temporaries which
seemed to be required by certain ACVC tests was not a lot of compiler
work, doing it intelligently--so that the temporaries were only
present when necessary--was going to be a major effort.

I explained that the difference between Ada and Fortran in this
case was in the definition of erroneousness--in Fortran the rule for
when you could optimize was much simpler, but sometimes allowed for
unexpected "junk" results.

All of a sudden I had a horrible suspicion, and asked a dozen or so
questions. My final answer was to do the optimizations, because they
were needed for Fortran as well! If the user specified a stride of 2 OR
if they used the Fortran COMPLEX type, the compiler would generate
junk results where the results were required to be correct in both
languages. I gave the developer a short fragment illustrating the
problem and was told that "of course" their Fortran compiler got it
right. All I could say was try it and see.

Of course, I got a call back later that day--the Fortran compiler
(or if you prefer, the hardware) really did produce junk results.
Three weeks later I got another call. Installing the fix had a
significant performance impact on the Fortran LINPACK benchmark, but
they were defending the change to management. It seemed that the bug
did occur in their LINPACK code, but the benchmark didn't check
answers.

So, often, these "differences" between programming languages are
really differences in the level of testing. You find out that
"obvious" optimizations require an awful lot of checking before you
can do them. (My favorite bad example of this is moving a common
subexpression out of a loop. You either have to prove that the code
can't cause a failure, or ensure that it is only evaluated if the loop
itself will be executed at least once.)
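A minimal C illustration of that hazard (my example, not Eachus's): hoisting the loop-invariant `a / b` evaluates it even when the body would never run, turning a harmless zero-trip loop into a potential divide-by-zero:

```c
/* Unoptimized: a/b is evaluated only if the loop runs at least once,
   so n == 0 with b == 0 is perfectly safe. */
long sum_quotients(long a, long b, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += a / b;       /* loop-invariant common subexpression */
    return sum;
}

/* A naive hoist would compute a/b unconditionally -- wrong when
   n == 0 and b == 0.  The guarded form the post calls for: */
long sum_quotients_hoisted(long a, long b, int n)
{
    long sum = 0;
    if (n > 0) {            /* guard: evaluate only if loop executes */
        long q = a / b;
        for (int i = 0; i < n; i++)
            sum += q;
    }
    return sum;
}
```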

Anyone who has maintained a significant compiler optimizer knows
that to maintain performance you have to devote about half your
maintenance effort to new optimizations because the marketroids won't
allow you to ship a compiler revision that is significantly slower.
--

Robert I. Eachus

with Standard_Disclaimer;
use Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...

Hans Boehm

Apr 14, 1995
de...@cs.nyu.edu (Robert Dewar) writes:

>Still waiting for a convincing example of an instance in which differences
>in language semantics or syntax cause actual real-world programs to have
>fundamentally different levels of performance because of differences in
>the language and not just the implementation ....

It's not hard to find differences that are sufficiently difficult to
compensate for in the optimizer that real optimizers don't do so reliably.
It's hard to argue that these couldn't possibly be overcome someday.

1) C's unconstrained pointers in practice seem to imply some performance
loss relative to Fortran for numerical code. Determining possible aliasing
patterns in C is hard and most real compilers seem to punt completely.
(I know there are many research papers describing how to do better ...)
This is especially important for parallelizing compilers.
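A small sketch of that aliasing problem (my example): because `n` might point into `a`, a C compiler must assume each store can change `*n` and reload it every iteration, whereas Fortran may assume its dummy arguments do not alias. C99 later added `restrict` precisely so the programmer can assert this, though that postdates this thread:

```c
/* The compiler cannot keep *n in a register across the stores:
   a[i] might alias *n, so *n is reloaded every iteration. */
void clear(float *a, int *n)
{
    for (int i = 0; i < *n; i++)
        a[i] = 0.0f;
}

/* With C99 'restrict' the programmer promises no aliasing, and the
   bound can be hoisted out of the loop -- the Fortran assumption. */
void clear_restrict(float * restrict a, int * restrict n)
{
    for (int i = 0; i < *n; i++)
        a[i] = 0.0f;
}
```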

2) In spite of lots of research on type inference, in reality Lisp-like
dynamic typing still seems to imply some tagging overhead. Part of the
problem here is presumably that separate compilation issues limit real
optimizations. On-the-fly compilation techniques can also help, but aren't
always applicable.

3) (Minor, but annoying to some of us) It's hard to get around the omission
of some construct that's efficiently implemented by the hardware. The
canonical example is integer multiplication producing a double-width
result. (You can recognize and optimize some forms of expressing this,
but not all forms. In the case of the above example, most C implementations
now provide a solution, but only by extending the language.)
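The recognizable C spelling of that idiom (a sketch; 16-bit `short` and 32-bit-or-wider `long` are assumptions): widen one operand *before* multiplying, and hope the compiler maps it onto the hardware's widening multiply:

```c
/* Full double-width product: cast first, then multiply.  A compiler
   that recognizes this form can emit a single widening MUL. */
unsigned long wide_mul(unsigned short a, unsigned short b)
{
    return (unsigned long)a * b;
}

/* Multiplying at the narrow width first discards the high half
   before the cast can save it. */
unsigned short low_mul(unsigned short a, unsigned short b)
{
    return (unsigned short)((unsigned)a * b);
}
```

Only the first form expresses the double-width result; other ways of writing the same intent may not be recognized, which is Boehm's point.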

4) If you consider the expected presence or absence of garbage
collection to be a property of the language, then the absence of garbage
collection can require a concurrent program to perform much more
synchronization, slowing it down substantially. Conversely, programs that
allocate large uninitialized arrays, and never write to most of the array,
are essentially guaranteed to be slower with garbage collection
(assuming the language provides no explicit deallocation primitives).

probably many others ...

Hans-J. Boehm
(bo...@parc.xerox.com)
Standard disclaimer ...

Trent Glascock

Apr 14, 1995
In article <3mcl5d$a...@acmez.gatech.edu>, gt2...@prism.gatech.edu says...
>(Lawrence Free/ A.F. Software Services) writes:
>>I'm not sure about the original poster's reasoning but 1. You can do
>>object oriented programming in both C and C++, and 2. You can avoid doing
>>object oriented programming in both C and C++. I have not dealt with a
>>language that forces the issue (there may be some). You can use object
>>oriented techniques in most computer languages. In COBOL it's efficient,
>>in C it's recommended, in assembler, we call it survival.
>
>Now I'm more confused. If there's really a difference between OOP and
>mere structural, then there are damned few object oriented techniques
>that can be used in "most computer languages". To mention just one
>example, does COBOL have generic functions, what C++ calls templates?
>Does plain C?

Generics are not an object oriented feature. The generally accepted
definition of an object oriented programming language is one that
supports encapsulation, inheritance and polymorphism. C++ and Smalltalk
have these as features of the language. The previous poster is correct
in saying that you can structure a C (or COBOL) program to take advantage
of these techniques. In other words, you can implement an object oriented
design in a language that doesn't have object oriented features.

--
Trent Glascock
glas...@esd.dl.nec.com

Speaking only for myself


Ian Joyner

Apr 15, 1995
de...@cs.nyu.edu (Robert Dewar) writes:

>P.S. the point about stacks and Fortran variables is not just theoretical,
>the B-5500 Fortran compiler used stack allocation for its variables.

^^^^
The B-5500 FORTRAN compiler *still uses* stack allocation on the A Series.
And yes you can write recursive FORTRAN programs.
--
Ian Joyner |"for when lenity and cruelty play |All opinions are
Unisys (ACUS) | for a kingdom, the gentler gamester|personal and are not
i...@syacus.acus.oz.au| is the soonest winner" W.S. Henry V|Unisys official comment

Herb Sutter

Apr 15, 1995
In article <3mjvq0$r...@nova.umuc.edu>, COA...@EUROPA.UMUC.EDU (Ell) wrote:

>In <1995Apr13....@rcmcon.com> rma...@rcmcon.com writes:
>>
>> Would that 'twere so. Actually, all OOPLs are conducive to the
>> implementation of structured programs. This must be so. Consider
>> that a structure program is an amalgam of data structures and
>> functions. In an OOPL we could create a single object which contains
>> all these data structures and functions. Voila, a structured program
>> implemented in a OOPL.
>
>I understand "structured programming" to mean well abstracted, or similarly
>well modularized as Dijkstra says (with little use of 'goto'). So that
>everything in one object would be contrary to structured programming.

Actually, it would be contrary to sound OO programming. You can still
abstract within a single object just as well as you can in a non-OOPL like
C. Think of this single object as your "global scope".

Mr. Martin's point is correct, but if I may amplify: The key here is that
OOP is a complete superset of SP. What is a standard structured program but
a collection of functions, along with data structures which are either
globally known (possibly only at file scope) or known internally in a
function (as either a local structure or one passed in as a parameter)? Yet
all these mechanics are supported within an object: all member functions
have access to the object's data ("global" or "file scope", and you can
still have multiple implementation files to refine that "file scope"), and
can each manipulate their own local (or parm-passed) data.

In other words, any structured program can be mapped to an OOPL
implementation by creating one object TheObject of class TheClass, wherein
we make data members out of all global and file-scope data structures and
prefix all functions with "TheClass::" (using C++ terminology). This
mapping preserves any abstractions or modularisations in the original.
(Incidentally, you can now create multiple instances of the whole program
since you've effectively encapsulated the program -- not good OOD since
something so monolithic is unlikely to be reusable, but an interesting
curiosity and one heck of an example of abstraction taken to the extreme.)
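The same mapping can even be sketched in plain C (my illustration, standing in for Sutter's `TheClass`): the program's globals become members of one struct, and every former function takes the instance explicitly, the way C++ passes `this`:

```c
/* All former globals live in one struct; each former function becomes
   a "method" taking the instance explicitly. */
struct the_program {
    int counter;    /* formerly a global variable */
};

void the_program_init(struct the_program *self) { self->counter = 0; }
void the_program_step(struct the_program *self) { self->counter++; }
int  the_program_result(const struct the_program *self)
{
    return self->counter;
}
```

Because the state is now per-instance, two `struct the_program` values run independently, which is exactly the "multiple instances of the whole program" curiosity Sutter mentions.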

So, yes, it appears to me that you can write any non-OO program in any OOPL,
no matter that the OOPL you pick may force you to write in terms of classes
and objects only. This is simply a result of the fact that OOP is a
complete superset of SP.

---
Herb Sutter #include <std_disclaimer.h>
he...@interlog.com "Me? Paranoid? ... Uh, why do you ask?"

Raoul De Kezel

Apr 16, 1995
In article <dewar.797512974@gnat>, de...@cs.nyu.edu (Robert Dewar) writes:
> TO make the argument that there is some inherent difference in language
> efficiency, you have to give very specific examples of semantic features
> and argue that it is impossible to implement comparable features with
> comparable efficiency. What any particular implementations happen to do
> is not part of the discussion.

I believe there is some inherent difference in language efficiency.
Bits of proof were given in this thread, but it may be useful to
recast them in the following argument:

1. In some cases, information about the future states of a computation
allows a compiler/translator/optimizer to generate faster code than
without the information.

2. If a programming language doesn't supply some means for the
programmer to give the information, the translator has to deduce
it from the program's text.

3. But it is in general impossible for a translator to deduce this
information, because computers are even worse as theorem provers
than we are. This is deeply rooted in some well-known limitations on
the deductive power of formal systems, most notably that computers
can't usually predict the future course of a computation.

4. So, if a language A allows a programmer to give some information
to a translator while a language B does not, and this information is
useful for some optimization, as the translator for B cannot in
general deduce the information from the program text, B is in some
sense inherently less efficient than A. Of course, B could be more
efficient than A in another area because it allows us to give another
kind of information, useful for another optimization.

An "instantiation" of the argument for the 32 bit integer is :

1. The knowledge of the fact that a variable is going to contain an
integer value only, which will always be representable with 32 bits,
allows a compiler to put it in a register and use the processor's
integer arithmetic instructions.

2. Standard Smalltalk doesn't allow me to say that a variable will
only reference (contain) a SmallInt, while C does.

3. Suppose I write a method which computes the first ten primes. I
don't expect any SmallTalk compiler to infer from the method's text that
the primes are all going to be representable with a SmallInt. If it
did, I would feed it with some other interesting problem, and maybe
get the Fields Medal.

4. So, a SmallTalk compiler *must* generate code which checks at
run time whether some arithmetic operation produced a value that no
longer fits in 32 bits, in order to convert it to some LargeInteger, and so on,
while I would have told a hypothetical typed SmallTalk compiler
not to bother.
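A hedged C sketch of the check being described (my code, not any real Smalltalk VM's): every SmallInt addition must test for overflow and be prepared to fall back to a LargeInteger path, which a typed language simply omits:

```c
#include <limits.h>

/* What a dynamically typed implementation must emit around every '+':
   do the add only after checking the result still fits a machine
   integer.  Returns 1 on the fast path; 0 means "promote to
   LargeInteger" (the slow path is elided here). */
int smallint_add(long a, long b, long *out)
{
    /* Overflow test written so as not to invoke signed-overflow UB. */
    if ((b > 0 && a > LONG_MAX - b) || (b < 0 && a < LONG_MIN - b))
        return 0;       /* would overflow: take the bignum path */
    *out = a + b;
    return 1;           /* still a machine integer */
}
```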

So, in this particular case, SmallTalk is inherently less efficient
than C. The difference between typed and typeless language was also
in my opinion a good example. I once wanted to write a type inferencer
for SmallTalk. I wanted it to deduce that the Points used in some
low level windows code contained only SmallInts. But I, playing
the inferencer, could not prove from the image source code only that
the Points were indeed going to contain only SmallInt. Again, in this
area, SmallTalk is inherently less efficient than C, because I cannot
say that the Points will contain only SmallInts, and nobody or nothing
can deduce it from the whole program text.

Of course, this is not an attack against SmallTalk, which I like a lot.
Standard C does not allow me to say that a variable isn't aliased,
and the compilers are not very good at proving that. (well, the problem
is not easy, which is the whole point). Probably all languages are in this
sense less efficient than assembler.

So, I have given a very specific example (compute the first ten primes)
which I argue can be more efficiently implemented in C than in SmallTalk.
If you produce a compiler able to deduce that the first ten primes are
representable with 32 bits while the first billion ones are not, I'll
give you another similar challenge, until you prove that your compiler
is always going to correctly deduce an upper bound on the integers used
in my programs. I believe this is impossible.

Robert Martin

Apr 16, 1995
he...@interlog.com (Herb Sutter) writes:

>So, yes, it appears to me that you can write any non-OO program in any OOPL,
>no matter that the OOPL you pick may force you to write in terms of classes
>and objects only. This is simply a result of the fact that OOP is a
>complete superset of SP.

And thus, a "pure" OOPL (if there is any such thing) is also a "pure"
structured language with an object wrapper around it. Thus, every
OOPL is a hybrid language.

;^)

Kenneth Almquist

Apr 16, 1995
> Generics are not an object oriented feature. The generally accepted
> definition of an object oriented programming language is one that
> supports encapsulation, inheritance and polymorphism.

Generics provide a form of polymorphism. For example, in Ada you
can write:

    generic
        type List_Element is private;
    package list_package is
        type List is private;

        [Define operations on Lists and List_Elements here.]

    end list_package;

Then the list operations will work with whatever type of list element
you instantiate the package with.
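For comparison with the C side of the thread, plain C can approximate this only by erasing the element type to `void *` (a sketch of mine, with none of the generic's compile-time type checking, and error handling elided):

```c
#include <stdlib.h>

/* A "generic" list node in plain C: the element type is erased, so
   the compiler cannot check what the list actually holds -- the
   checking the Ada generic gives you for free at instantiation. */
struct node {
    void *element;
    struct node *next;
};

struct node *cons(void *element, struct node *next)
{
    struct node *n = malloc(sizeof *n);   /* allocation check elided */
    n->element = element;
    n->next = next;
    return n;
}
```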
Kenneth Almquist

Robert Dewar

Apr 16, 1995
Matt, sure you can design languages that show fundamental efficiency
differences (I don't think Turing machines would be too efficient in
practice, and indeed any automaton lacking direct access is fundamentally
hampered in a well defined theoretical sense).

The question is how does this show up in real languages. So far two
reasonable responses:

o Provision of "big numbers" rather than machine size integers

o Better aliasing in a better typed language

Generally though, the interesting bottom line of this discussion is that
there *are* differences, but they are much less extensive and less
significant than one might imagine!


Peter Kron

Apr 17, 1995
rma...@rcmcon.com (Robert Martin) wrote:

>Crookie <g...@mfltd.co.uk> writes:

>>er, i thought the inherent difference in S/T (as opposed to Object COBOL
>>or C++) is that it does _everything_ by messages. even 2+2.

>The notion that in Smalltalk everything is objects and messages is
>often touted as the justification for calling it a pure OOPL. But the
>concept is nonsense. Methods are just functions. Messages are just
>function calls. Polymorphic message dispatch is just calling
>functions through pointers.

At the hardware level, this is correct: methods are implemented by
call/return semantics just like functions. However, the determination
of the called address is significantly different. In C++ the called
function is determined in part by the calling function, in that it
indexes a function table based on the definition of the target object
at a specific point in time. In Smalltalk the called function is
determined by the target object. This is more pure, in my view,
because it is more robust, i.e. less sensitive to changes in
implementation of the target object.
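Martin's reduction can be sketched directly in C (my example): the "message send" is a call through a pointer that the target object itself carries, so the called address is determined by the object, not the caller:

```c
/* Each object carries its own dispatch: the called address is
   determined by the target object, not by the calling code. */
struct shape {
    double (*area)(const struct shape *self);   /* the "message" */
    double w, h;
};

double rect_area(const struct shape *s) { return s->w * s->h; }
double tri_area(const struct shape *s)  { return s->w * s->h / 2.0; }

/* The polymorphic "send": one call site, many implementations. */
double send_area(const struct shape *s)
{
    return s->area(s);
}
```

A C++ virtual call differs mainly in that the per-object pointer indexes a shared vtable rather than sitting in each object, which is the caller-side dependence Kron describes.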

>The difference between an OO language and a non OO language is that an
>OO language provides facilities for managing polymorphic dispatch and
>for encapsulating functions and data structures.

But there is room for degrees of "purity". C++ allows you to "cheat"
by mapping instance variables in the caller via inline functions.
Smalltalk does not. In C++ functions display different "degrees" of
polymorphism, depending on the use of the "virtual" keyword. Smalltalk
does not. These impurities can certainly be managed in single-app and
single-site projects, but it remains to be seen how manageable they
will prove to be in a large-scale component environment.

So I think a very good case can be made that C++ is not as "pure" as
Smalltalk. Note that this isn't to say that C++ is not an OOPL or that
it can't be used effectively...purity may not be the primary
consideration. Efficiency/purity is a tradeoff, and like all tradeoffs
involves costs which must be evaluated either way.
Peter...@corona.com
Corona Design, Inc.
Seattle, WA


Kennel

Apr 17, 1995
Robert Dewar (de...@cs.nyu.edu) wrote:
> "I don't think you would because the design of the language would not allow the
> kind of performance expected by the users."

> You keep making unsubstantiated claims like this, and when challenged to
> back up these claims, you simply quote observations about the behavior of
> your particular compiler.

> What is an example of an aspect of "the design of the language" that
> fudnamentally restricts performance. Please don't tell me any more about
> what your compiler can or cannot do. Instead quote a particular feature
> and explain at a theoretical level why you think that it restrics the
> maximum achievable performance.

OK.

Imagine a (moronic) language which did not have integer constants
anywhere; everything must be loaded in from external storage or computed
from these.

The compiler does NOT know about the contents of this initialization file,
whereas if they were written in the program, the compiler could propagate
many more constants and figure out much more at compile time.

This is an extreme example of the phenomenon in question.

Imagine this language had no value type integers or characters:
everything must be potentially referenceable, and those references traced
by the garbage collector. No exceptions allowed.

This is a not quite as extreme example.


matt


Robert I. Eachus

Apr 17, 1995
In article <D71Gs...@nntpa.cb.att.com> k...@socrates.hr.att.com (Kenneth Almquist) writes:

> Let me map out an alternative design. To begin with, I assume
> that each device has a set of drawing operations that it accepts.

> Presumably all devices permit drawing individual pixels, but
> higher level operations are likely to be much faster and therefore
> should be used if available. So this design has three sets which
> are expected to grow over time: the set of shapes, the set of
> devices, and the set of drawing operations.

Let me propose a third alternative. The designer of the system
defines:

1) A set of "low-level" primitives which must be supported for each
device, no matter how inefficiently.

2) A set of optional operations which may or may not have native
support for each device.

We define an abstract device type, with the first group of
operations declared abstract, and the second group having definitions
in terms of the first group.

For each device, derive from the common ancestor, define the first
operation set, and add those from the second set which are easy or
needed for efficiency.

For each shape, derive from a common abstract parent, and define the
drawing primitives using any operation from either set.

Gee, now we have a nice implementation which: requires O(1)
subroutine definitions for each new device type, O(1) subroutine
definitions for each new shape (in other words, the amount of work to
add a new device is independent of the number of shapes and
vice-versa), and uses Ada 95 single dispatching very nicely. The only
hard part is the initial partitioning of the drawing primitives, and
that is what standards are for.

John Max Skaller

Apr 17, 1995
In article <dewar.798087061@gnat>, Robert Dewar <de...@cs.nyu.edu> wrote:
>Matt, sure you can design languages that show fundamental efficiency
>differences
>
>The question is how does this show up in real languages. So far two
>reasonable responses:
>
> o Provision of "big numbers" rather than machine size integers
>
> o Better aliasing in a better typed language
>
>Generally though, the interesting bottom line of this discussion is that
>there *are* differences, but they are much less extensive and less
>significant than one might imagine!

I don't understand this. The differences between
languages like an assembler, C, Smalltalk, Eiffel, Postscript,
LISP, Prolog, SML, Unix shell ... are huge.

The differences are smaller between similar languages.
For example compiled procedural languages are probably similar.

It is my understanding that the movement to safe and
efficient compiled procedural languages involves provision
of run-time checking and compile time deduction to optimise
away the checking.

The major problems in this area are, IMHO, how to
design the language so that programmers can assist the compiler
in this task -- without rendering the meaning of the program
so obscure that the correctness of the program with respect to
the human intent is unverifiable.

There are other related movements. One is to
provide Garbage Collection, and I believe it is now
close to the point that automatic garbage collectors
are often _more_ efficient than manually coded storage
allocation systems -- apart from being more reliable.
The big problem seems to be real time response,
not amortised efficiency.

Anyhow "type safety" has been around for a while.
Languages like C++ epitomise highly efficient processing
with strong checking of type constraints -- while failing
to provide any checking where there is a run-time overhead
which the language does not provide much help to the compiler
to optimise away. (For example array index bounds).

Kennel

Apr 17, 1995
Robert Dewar (de...@cs.nyu.edu) wrote:
> Matt, sure you can design languages that show fundamental efficiency
> differences (I don't think Turing machines would be too efficient in
> practice, and indeed any automoton lacking direct access is fundamentally
> hampered in a well defined theoretical sense).

> The question is how does this show up in real languages. So far two
> reasonable responses:

> o Provision of "big numbers" rather than machine size integers

> o Better aliasing in a better typed language

> Generally though, the interesting bottom line of this discussion is that
> there *are* differences, but they are much less extensive and less
> significant than one might imagine!

There are a few "fundamental" differences, but I suspect that there are
many more "not-quite-fundamental-but-still-very-important-if-you-consider-
economic-reality" differences in languages.

For instance, in Fortran (77?), array sizes are just *defined* to be fixed
in a subroutine. The compiler can always assume this is true and you
get fast code for any loop.

Suppose you had an OO language implementing an array class whose array
bounds were attributes. Now, to get the same speed as Fortran a
compiler would have to prove that inside a loop those bounds wouldn't
get changed. (And through functions as well). Possible to do, but
maybe not implemented.

Even though most language decisions that make optimization more difficult
could be overcome with programmer sweat, that takes away time from other
quality of implementation areas or raises the cost.

This really matters.

mbk

Curtis Bass

Apr 17, 1995
rma...@rcmcon.com (Robert Martin) wrote:
>
> Curtis Bass <cbass%intmed...@mhost.utmb.edu> writes:
>
> >gt2...@prism.gatech.edu (Richard Ladd Kirkham) wrote:
> >>
> >> In article <D6uA7...@mcshub.dcss.mcmaster.ca> cs4g...@maccs.dcss.mcmaster.ca (Ross Driedger) writes:
> >> >
> >> > C++ . . . is not an OOL in the pure sense)
> >>
> >> Why?
>
> >Well, C++ does directly support all the mechanisms necessary
> >for object-orientation (encapsulation, abstraction, polymorphism,
> >and inheritance), but it still allows you to write procedural
> >(i.e. structured) code without requiring an object-oriented
> >design or implementation. In a "pure" OOPL, such as Smalltalk,
> >EVERYTHING is an object, and the object-oriented paradigm is
> >the ONLY one supported -- you cannot write strictly procedural
> >code in a 'pure" OOPL.
>
> Would that 'twere so. Actually, all OOPLs are conducive to the
> implementation of structured programs. This must be so. Consider
> that a structure program is an amalgam of data structures and
> functions. In an OOPL we could create a single object which contains
> all these data structures and functions. Voila, a structured program
> implemented in a OOPL.

I maintain that this is STILL not a "strictly procedural" model,
but is rather a very BAD OO model. The procedures (methods) ARE
still attached to an object, yes? The procedures (methods) are
STILL NOT independent entities, correct? Yes, you can "call" this
"procedural" coding, but then I suppose we can "call" ANYTHING
whatever we wanted to. The bottom line is that all that your
example illustrates is bad OO programming practice -- sort of
like having just ONE procedure in Pascal. Yes, it can be done,
and we could even call the result "unstructured," but I would
not say that Pascal is "conducive" to monolithic, unstructured
programming, and I would not say that Smalltalk is "conducive"
to structured, procedural programming. Both are examples of
bad design, nothing more.

> You might think that nobody would ever *do* such a thing, but in fact
> it is fairly common practice for people to convert procedural legacy
> code into OOPLs by doing exactly this. First they get the design into
> an OOPL, and then they migrate it into an OOD.

The fact that it may be common doesn't mean that what they are
doing is "strictly procedural." I still maintain that it's nothing
more than bad OOD, albeit admittedly so, with the plan of being
migrated to "good" OOD.

> So, what then is a "pure" OOPL. My contention is that the term has no
> useful definition.

The fact that you can MIMIC procedural coding in Smalltalk does
not invalidate the definition. In Smalltalk, EVERYTHING is an
object. Procedural code does NOT, and CANNOT exist independent
of objects. In C++, you CAN have procedures that exist independently,
even in an ostensibly OO program. In C++, you do not have to
mimic or kludge a strictly procedural solution -- you can
write structured, procedural code "natively" (i.e. you don't
have to fake it).

The term DOES have a useful definition, and I have provided
one that's perfectly valid.

> >It's sort of like BASIC -- you CAN write structured code in
> >most modern BASIC dialects, using procedures and parameter-
> >passing mechanisms with local variables, but you can also
> >write old-fashioned, unstructured spaghetti BASIC, using
> >GOTO's, GOSUBS, line numbers, and all global variables. So,
> >we can say that BASIC is not a "structured" language in the
> >"pure" sense.
>
> Yes, just like C is not an OO language in the "pure" sense. But that
> was not the argument you were making. You were making the opposite
> argument. By analogy you were saying that a "pure" structured
> language would not allow the implementation of unstructured programs.

What I was doing was explaining why C++ was not an OOPL in the
pure sense -- nothing more, nothing less.

For the record, "C" is not an OOPL in ANY sense. Yes, we can
"do OO programming" in C, but WE have to do all of the "OO
work" ourselves.

As far as "what I am saying" is concerned, what I am saying is
simply this: C++ IS Object-Oriented, but not in the "pure" sense.
Smalltalk IS "purely" Object-Oriented. You can write strictly
procedural code in C++, You CANNOT in Smalltalk -- the best
you can do in Smalltalk (or other "purely OOPL's) is to have
just ONE object with all of your procedural code attached to it.
THIS IS STILL OBJECT-ORIENTED, although very poorly so. This
would be like writing a single procedure in Pascal, a "purely
structured" language (yes, I know, we can have 'goto's and
labels in Pascal, but just humor me on this). The result
would still be "structured" but very poorly so (i.e. it would
not be "spaghetti" code).

With modern BASIC, we can write structured code (so we can
call modern BASIC a "structured" language), but we can also
write spaghetti code (so it isn't "purely" structured).
We can say that modern BASIC is "structured, but not in the
'pure' sense." This parallels C++'s being object-oriented,
but not in the pure sense. Do you get it?

> -------
>
> I invest a lot of time in these arguments. And that is strange
> because I consider them to be almost frivolous. However the reason I
> spend the time on them is that they are also "seductive". Not that
> people who use the term "pure" are being dishonest, or sneaky, but
> that the term "pure" is very emotive. Purity is most often equated
> with "goodness" or "simplicity". And these implications are not
> appropriate for describing the OO-ness of a language.

Perhaps we need a better term, such as "strictly OO" or "strictly
a structured language." I believe that it IS valuable to differentiate
between languages that straddle two paradigms (C++ straddles
the "structured" and "object-oriented" paradigms, BASIC straddles
the "unstructured" and "structured" paradigms), and those that
do not (C and Pascal are "strictly" structured languages, Smalltalk
is "strictly" an object-oriented language).

> I remember talking to a young man in High School a few years ago. I
> told him that I did a lot of work in C++. He told me that he was not
> interested in learning C++ because it was not a "pure" object oriented
> language.

This is unfortunate. However, you stated that this was a few
years ago. If this young man is STILL pursuing a job as a
programmer, by now he probably realizes that C++ is an
extremely viable skill to learn, even if it isn't a pure OOPL.
If he still considers C++ "unworthy," then he is ignorant, and
there is little we can do about that.

> Because of that event, whenever anybody talks about the purity of an
> OOPL, this young man's face is conjured in my mind, and I feel the
> need to prevent someone from choosing or rejecting a language based
upon such a misleading attribute.

Well, I repeat: Maybe we just need different terminology. I
admit that the term "pure" definitely has connotations of
value or "worthiness."

> --
> Robert Martin | Design Consulting | Training courses offered:
> Object Mentor Assoc.| rma...@rcmcon.com | Object Oriented Analysis
> 2080 Cranbrook Rd. | Tel: (708) 918-1004 | Object Oriented Design
> Green Oaks IL 60048 | Fax: (708) 918-1023 | C++

Curtis Bass
Software Systems Specialist II
University of Texas Medical Branch


Curtis Bass

Apr 17, 1995
nco...@watson.ibm.com (Norman H. Cohen) wrote:
>
> In article <3mgnkc$e...@atlantis.utmb.edu>, Curtis Bass
> <cbass%intmed...@mhost.utmb.edu> writes:
>
> |> In a "pure" OOPL, such as Smalltalk,
> |> EVERYTHING is an object, and the object-oriented paradigm is
> |> the ONLY one supported -- you cannot write strictly procedural
> |> code in a 'pure" OOPL.
>
> Nonsense. It is easy to do a straightforward transliteration of a
> Fortran IV program into Smalltalk, for example.

Perhaps. However, I maintain that the result is NOT "purely
procedural." It has to be Object-Oriented, even if all of
your code is attached to only one object. This would be
like writing a single procedure in Pascal -- the resulting
program is not "unstructured," is it? The structure is not
"good," but it's better than spaghetti code.

The bottom line is that your "straightforward transliteration
of a Fortran IV program into Smalltalk" will STILL result in
an Object-Oriented program, although it will have bad OO
Design. There is no way around it -- your procedures cannot
exist as independent entities; they must be attached to an
object.

> --
> Norman H. Cohen nco...@watson.ibm.com

Curtis Bass
