
Does Ada really need MULTIPLE inheritance?


Norman H. Cohen

Oct 16, 1989, 9:57:09 AM
Two recent workshops focused on adding support for object-oriented
programming to Ada. One, sponsored by MITRE Corporation, was held
September 11-13 at Woburn, Massachusetts. The Workshop on Language
Issues for Reuse: Ada for the 90's, sponsored by University of Maine
was held at Deer Isle, Maine, September 25-29. Specific proposals for
supporting object-oriented programming in Ada were discussed at each
workshop.

I'll leave discussion of the details to the final workshop reports, which
are now in preparation, but I do want to mention two areas of consensus
reached at both workshops:

(1) Ada should support inheritance.
(2) There is no convincing need for MULTIPLE inheritance.

Since nobody at either workshop was a strenuous advocate of multiple
inheritance, I would like to see the argument for multiple inheritance
presented here. Among the thoughts that contributed to the consensus
AGAINST multiple inheritance were:

(1) In many cases, the effect of multiple inheritance could be
achieved also by instantiating a generic unit defining a subclass
of a class given by a generic parameter. This allows different
classes to be extended in a uniform way.

(2) Multiple inheritance may seem essential in some object-oriented
languages because inheritance is the only importation mechanism.
Ada can get by with a WITH clause in many contexts where other
languages use multiple inheritance. In Eiffel, for example, one
inherits from class MATH, which has operations but no state data,
to achieve the effect of WITHing a math package in Ada.

(3) In some language proposals, multiple inheritance is a natural
generalization of single inheritance. In other proposals,
however, multiple inheritance is difficult to accommodate.
For example, the view that subclasses should be treated as Ada
subtypes has much going for it (for details, see my proposal
"Ada Subtypes as Subclasses," Research Report RC14912, which can
be obtained by writing to IBM Thomas J. Watson Research Center,
Distribution Services F-11 Stormytown, P.O. Box 218, Yorktown
Heights, NY 10598), but it is incompatible with multiple
inheritance in a strongly typed language like Ada.

(4) Many (including Bertrand Meyer) hold that the inheritance
hierarchy should reflect an "is-a" relationship. That is, each
instance of a subclass should be an instance of its immediate
superclass(es) as well. There are a few well-known examples
where the "is-a" relationship holds with multiple parents (a text
window is both a text file and a window, e.g.) but such examples
are rare, too rare to justify further complication of the
language.

(5) Multiple inheritance makes it difficult to determine the source
of a subclass's features, or to determine the impact of changing
the interface of a superclass. It tends to lead to undisciplined
software composition that may be fine for exploratory programming
but is unacceptable for the huge projects in which Ada is used,
projects that require strict configuration management.
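Point (1) above can be sketched in Python (purely illustrative; an Ada generic would take a formal type or package parameter instead, and every name here is made up): a function standing in for a generic unit takes a class as its parameter and returns a subclass of it, so unrelated classes are extended in a uniform way without multiple inheritance.

```python
def with_logging(base):
    """Stand-in for a generic unit: builds a subclass of whatever class it is given."""
    class Logged(base):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.log = []
        def record(self, event):
            self.log.append(event)
    return Logged

class Stack:
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)

class Counter:
    def __init__(self):
        self.n = 0

# Two unrelated classes extended in the same uniform way ("instantiation"):
LoggedStack = with_logging(Stack)
LoggedCounter = with_logging(Counter)

s = LoggedStack()
s.push(1)
s.record("pushed 1")
```

The extension is written once and applied to each class, which is the uniformity the workshop consensus pointed at.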

Norman Cohen

David A Eichmann

Oct 17, 1989, 1:23:53 PM
From article <891016152...@ajpo.sei.cmu.edu>, by NCO...@IBM.COM ("Norman H. Cohen"):
...

> Since nobody at either workshop was a strenuous advocate of multiple
> inheritance, I would like to see the argument for multiple inheritance

OK, let's say I'm creating refinements of a general concept of vehicle,
which contains two attributes: means of propulsion (motor, air, etc.) and
operating medium (land, air, water). I can make an initial refinement
into air vehicles (adding appropriate attributes), land vehicles, and water
vehicles - refining by medium; or I can refine into motor vehicles, air
(powered) vehicles, etc. - refining by propulsion. With single inheritance,
I must choose one over the other. With multiple inheritance, there is
no need. A motorized air vehicle is an instance of air vehicle AND
an instance of motorized vehicle. My initial design choice does not color
subsequent interpretations and perspectives on the objects in question.
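This lattice can be sketched directly in Python, which supports multiple inheritance (the class names are hypothetical):

```python
class Vehicle:
    """General concept: refined along two independent dimensions."""
    pass

class AirVehicle(Vehicle):        # refinement by operating medium
    medium = "air"

class MotorVehicle(Vehicle):      # refinement by means of propulsion
    propulsion = "motor"

class MotorizedAirVehicle(AirVehicle, MotorVehicle):
    """Is-a AirVehicle AND is-a MotorVehicle; neither refinement
    dimension had to be chosen over the other."""
    pass

v = MotorizedAirVehicle()
```

With single inheritance, one of the two parent classes would have to be reimplemented by hand inside the other's subtree.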


-----
David Eichmann
Dept. of Statistics and Computer Science
West Virginia University Phone: (304) 293-3607
Morgantown, WV 26506 Email: eich...@a.cs.wvu.wvnet.edu

David Chase

Oct 17, 1989, 4:49:41 PM
I'm not well-versed in Ada, but people I've talked to assure me that
Ada has something similar to the "opaque types" of Modula-2 and
Modula-3. Combining opaque types and multiple inheritance can lead to
some nasty problems. I don't have a solution; just a problem.

I'm assuming that by "multiple inheritance", the people discussing it
mean the ability to (1) extend an existing type or (2) refine an
abstract (virtual, in the C++ lingo) type. In the first case one
might specialize "vehicle" into "car" and "truck"; in the second case,
one might provide a method for "compare" for an abstract "Comparable",
and take advantage of all the code already written to deal with
Comparable objects.

It's the second case, and overriding of existing message-method
bindings in general, that causes the problem.

Suppose I have a type T1, and it has a message-method binding
M1-m1a (M1 is the message, m1a is the method for it). Suppose I
have a type T2 which inherits from T1, and uses the M1-m1a binding.

T1 = OBJECT METHODS M1() := m1a END;

T2 = T1 OBJECT METHODS ... END;

Now suppose I export T2 opaquely, along with a procedure P2 which does
something useful with a T2.

OPAQUE OBJECT T2; (* I'm making up syntax as I go along *)
PROCEDURE P2(x:T2) ...;

Now suppose I have a type T3 which inherits from T2, and also inherits
from T1, but uses the message-method binding M1-m1b. I claim that (1)
this should be legal, because the programmer has no way of knowing
that it should be illegal (information hiding, right?) (2) An object O
of type T3 should have the binding M1-m1b where it is visible that O
is a T3 (because that's what the programmer said) (3) An object O of
type T3 should have the binding M1-m1a whenever it is in a context
where it is known to be a T2, but not known to be a T3 (because the
programmer should not be able to invalidate the correctness of a
module if the internals of the module are hidden).

That is (elaborating (3)) if P2 is called, then within P2 the binding
should be M1-m1a, and if O.M1 is sent in that context, then within the
code for m1a the binding should still be M1-m1a. That is, a second
time, changes to the method bindings for T1 by a subtype T3 *must not*
change the behavior of T3 when considered as a T2; any proofs about
the behavior of a T2 would thus go out the window (and the programmer
would be clueless, because the dependence of T2 on T1's message-method
bindings is hidden).

T3 = T1, T2 OBJECT METHODS M1 := m1b END;

at some point in the program,

VAR x : T3;
...
P2(x); (* x MUST act as a T2 in P2, including the binding of
M1 to m1a. *)
...
x.M1() (* This MUST call method m1b, not m1a. *)

I am *not* arguing that this should be the case if T2 is not opaque;
in that case everything is in the clear, and either an error message
or a change in T2's behavior is allowed. This nasty situation could
be avoided by creative prohibition (objects cannot be opaquely
exported -- yuck; opaque types cannot be inherited -- yuck; no
multiple inheritance -- many people say yuck, but that's what we live
with in Modula-3), but I'd be even happier if I could figure it out.
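The scenario can be sketched in Python (names hypothetical; the parent order is adjusted for Python's method-resolution rules). Python dispatches fully dynamically, so requirement (3) does NOT hold there: the T3 override is seen even by code that knows the object only as a T2, which is exactly the hazard described above for opaquely exported types.

```python
class T1:
    def M1(self):              # binding M1 -> m1a
        return "m1a"

class T2(T1):
    pass                       # inherits the M1 -> m1a binding

def P2(x):
    """Module code written, and verified, against T2's behavior."""
    return x.M1()

class T3(T2, T1):              # inherits from both T2 and T1
    def M1(self):              # rebinds M1 -> m1b
        return "m1b"

print(P2(T2()))                # "m1a": what the module author proved
print(P2(T3()))                # "m1b": requirement (3) violated; the
                               # proof about T2's behavior is void
```

A language satisfying requirement (3) would have to dispatch on the statically known view (T2 inside P2), not on the object's dynamic type.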

David

William Thomas Wolfe, 2847

Oct 18, 1989, 10:27:47 PM
From NCO...@IBM.COM (Norman H. Cohen):

> Two recent workshops focused on adding support for object-oriented
> programming to Ada. [...] I do want to mention two areas of consensus
> reached at both workshops:
> (1) Ada should support inheritance.
> (2) There is no convincing need for MULTIPLE inheritance.
> Since nobody at either workshop was a strenuous advocate of multiple
> inheritance, I would like to see the argument for multiple inheritance
> presented here.

David Eichmann has presented one; a few more appear below.

> Among the thoughts that contributed to the consensus
> AGAINST multiple inheritance were:
>
> (1) In many cases, the effect of multiple inheritance could be
> achieved also by instantiating a generic unit defining a subclass
> of a class given by a generic parameter. This allows different
> classes to be extended in a uniform way.
>
> (2) Multiple inheritance may seem essential in some object-oriented
> languages because inheritance is the only importation mechanism.
> Ada can get by with a WITH clause in many contexts where other
> languages use multiple inheritance. In Eiffel, for example, one
> inherits from class MATH, which has operations but no state data,
> to achieve the effect of WITHing a math package in Ada.

I would submit that the generic mechanism and the multiple inheritance
mechanism are conceptually orthogonal; trying to use generics to simulate
multiple inheritance is like trying to simulate recursion with a stack,
etc.; although it may be possible, its desirability in terms of program
clarity is highly questionable.



> (3) In some language proposals, multiple inheritance is a natural
> generalization of single inheritance. In other proposals,
> however, multiple inheritance is difficult to accommodate.
> For example, the view that subclasses should be treated as Ada
> subtypes has much going for it (for details, see my proposal
> "Ada Subtypes as Subclasses," Research Report RC14912, which can
> be obtained by writing to IBM Thomas J. Watson Research Center,
> Distribution Services F-11 Stormytown, P.O. Box 218, Yorktown
> Heights, NY 10598), but it is incompatible with multiple
> inheritance in a strongly typed language like Ada.

Subclassing and subtyping are two somewhat similar, but distinct
concepts; both need to be properly supported. I see no reason why
this could not be accomplished.

> (4) Many (including Bertrand Meyer) hold that the inheritance
> hierarchy should reflect an "is-a" relationship. That is, each
> instance of a subclass should be an instance of its immediate
> superclass(es) as well. There are a few well-known examples
> where the "is-a" relationship holds with multiple parents (a text
> window is both a text file and a window, e.g.) but such examples
> are rare, too rare to justify further complication of the
> language.

I would argue first that such examples are not necessarily rare;
it may simply be that the lack of an ability to express the idea
at all has greatly restricted the amount of time spent considering
the possibilities, and that many valid applications of the concept
will emerge once its expression becomes possible.

> (5) Multiple inheritance makes it difficult to determine the source
> of a subclass's features, or to determine the impact of changing
> the interface of a superclass. It tends to lead to undisciplined
> software composition that may be fine for exploratory programming
> but is unacceptable for the huge projects in which Ada is used,
> projects that require strict configuration management.

Determining the source of a subclass's operations can be done
with an appropriate tool automatically. Determining the impact
of changing the interface of a superclass is similar to the problem
of determining the impact of a change to a package interface; when
the syntactic interface to a package changes, there is a "direct"
impact in that certain portions of code (the code WITHing the
modified spec) must be recompiled. However, this does not bound
the *semantic* impact; semantic modifications can be made without
even causing recompilation, and can propagate throughout the system
essentially unchecked. Consider what would happen if something as
heavily relied upon as a mathematics package were to be semantically
modified; newly linked programs everywhere could begin to fail through
absolutely no fault of their own, whether because of subprograms that
failed catastrophically as a direct result of the modification or
because of subprograms that began to deliver subtly incorrect results.

The only answer to this, I think, would be to incorporate something
like Anna/TSL into the definition of Ada, and even then the costs of
specifying and enforcing semantics so tightly would be considerable.
In short, there's no easy answer, and no escape from the problem.

Finally, the point about it tending to lead to undisciplined software
composition is certainly not without supporting evidence, in that
the capability has so far been most extensively used in support of
exploratory programming. But it is not my purpose to enable
exploratory programming to be done; my objective is to enable the
natural expression of real-world relationships, and also to enable
the systematic reuse of concepts which have already been expressed.

Perhaps ideas will emerge which will lead to a means of restricting
undisciplined use of multiple inheritance, just as ideas such as
type checking and making pointer arithmetic difficult have
successfully restricted the undisciplined use of pointers.


Bill Wolfe, wtw...@hubcap.clemson.edu

William Thomas Wolfe, 2847

Oct 21, 1989, 3:39:05 PM

[For those people in newsgroups other than comp.lang.ada who are
also seeing this article: this thread arose in the context of a
discussion in comp.lang.ada regarding the addition of multiple
inheritance to Ada as part of the current Ada 9X revision process.]

From ch...@Ozona.orc.olivetti.com (David Chase):

> I'm not well-versed in Ada, but people I've talked to assure me that
> Ada has something similar to the "opaque types" of Modula-2 and
> Modula-3. Combining opaque types and multiple inheritance can lead
> to some nasty problems. I don't have a solution; just a problem.

OK, first I'll rephrase the problem in a way that Ada people
will find easier to follow. David assumes that there are two
uses for inheritance; the first is specialization, in which
one would take the abstraction "vehicle" and provide additional
operations in order to define abstractions such as "car" or "truck".
The second is derivation, in which one would take an existing
abstraction and shape it into a new abstraction, using the
implementations provided by the old abstraction as appropriate
and overriding operations with new implementations as necessary.

> It's the second case, and overriding of existing message-method
> bindings in general, that causes the problem.

The overriding of the current implementation for only certain
operations of a type does cause problems: in general, a good
implementation of a given data abstraction will take maximum
advantage of the relationships between the various operations,
creating an implementation whose data structures and algorithms
depend strongly on the fact that exactly that set of operations
is to be supported. For example, a standard queue abstraction
might be implemented as a singly-linked list structure together
with a descriptor which points to the first and last elements in
the list, in order to efficiently support only the operations
Enqueue and Dequeue.
But now suppose that we wish to support a priority queue instead,
and we want to override the existing Dequeue operation with a
new one. The new operation would scan the entire list in order
to find the element having the highest priority, and then dequeue
that element. Unfortunately, this is an O(n) algorithm, which
is quite inefficient relative to the O(log n) algorithms that
could be used if the new abstraction were supported by a binomial
forest rather than a linked list.
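The tradeoff can be sketched in Python (illustrative code, not from the thread; the standard heapq binary heap stands in for the binomial forest):

```python
import heapq

class Queue:
    """FIFO queue: list stands in for the linked list + descriptor."""
    def __init__(self):
        self.items = []
    def enqueue(self, x):
        self.items.append(x)
    def dequeue(self):
        return self.items.pop(0)

class NaivePriorityQueue(Queue):
    """Reuse by inheritance: override Dequeue with an O(n) scan."""
    def dequeue(self):
        i = self.items.index(min(self.items))
        return self.items.pop(i)

class HeapPriorityQueue:
    """Purpose-built implementation: O(log n) per operation."""
    def __init__(self):
        self.heap = []
    def enqueue(self, x):
        heapq.heappush(self.heap, x)
    def dequeue(self):
        return heapq.heappop(self.heap)
```

Both yield the same observable behavior; the inherited version was nearly free to write but pays O(n) on every Dequeue, which is precisely the implementation-quality-for-development-speed trade described above.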

This represents the result of a tradeoff. By using inheritance
in conjunction with the implementation of the old abstraction, we
have traded off implementation quality in exchange for speed of
implementation. The priority queue implementation obtained via
the overriding of inherited operations is of very low quality,
but this may or may not be an important consideration relative
to the cost of providing a high-quality implementation.

It has long been recognized that the time to worry about product
efficiency is AFTER the product has been developed and put through
a profiler to determine where the bottlenecks are in the system,
since in this way the high cost of maximizing efficiency can be
directed to the points at which it will do the most good. This
principle extends to our derived priority queue implementation;
if the profiler indicates that a software system is running with
unacceptably slow speed due primarily to the time required to do
priority queue operations, then a more sophisticated implementation
of the priority queue which takes maximum advantage of the
relationships between the set of required operations is probably
a good idea. We then apply economic principles once again and
try to find a software components supplier which will sell us a
high-performance priority queue implementation, complete with
formal specifications for each operation and timing results for
typical workloads. In general, the components supplier will even
be able to offer a number of different implementations, some of which
will emphasize the particular operations which are most crucial to
your system at the expense of others which are not as important.

Now back to the problem under consideration... :-)

> Suppose I have a type T1, and it has a message-method binding [...]

Rephrasing, here is the problem:

package Example is  -- this is the specification

   type T2 is limited private;  -- can't know or use the details

   procedure DO_SOMETHING_WITH (Parameter : in out T2);

end Example;

package body Example is  -- this is the implementation

   -- type T1 has the operation OP1, and implementation IMP1.

   -- type T2 inherits operation OP1 and implementation IMP1 from T1.

end Example;


-- Now the user of package Example declares a new type, T3,
-- which is_a T1 and also is_a T2. The user then specifies
-- a new implementation, IMP2, for OP1.



> I claim that (1) this should be legal, because the programmer has
> no way of knowing that it should be illegal (information hiding,
> right?) (2) An object O of type T3 should have the binding [OP1-IMP2]
> where it is visible that O is a T3 (because that's what the programmer
> said) (3) An object O of type T3 should have the binding [OP1-IMP1]
> whenever it is in a context where it is known to be a T2, but not
> known to be a T3 (because the programmer should not be able to
> invalidate the correctness of a module if the internals of the
> module are hidden). [In other words, if procedure DO_SOMETHING_WITH
> is called, which expects a parameter of type T2, then IMP1 should be
> used for the object of type T3 because it is expected to perform as
> a T2 in that context.] [... C]hanges to the [implementation] bindings
> for T1 by a subtype T3 *must not* change the behavior of T3 when
> considered as a T2; any proofs about the behavior of a T2 would
> thus go out the window (and the programmer would be clueless, because
> the dependence of T2 on T1's [specification-implementation] bindings
> is hidden). [...]
>
> I am *not* arguing that this should be the case if T2 is not opaque;
> in that case everything is in the clear, and either an error message
> or a change in T2's behavior is allowed. This nasty situation could
> be avoided by creative prohibition (objects cannot be opaquely
> exported -- yuck; opaque types cannot be inherited -- yuck; no
> multiple inheritance -- many people say yuck, but that's what we live
> with in Modula-3), but I'd be even happier if I could figure it out.

First, let me give a reference to an important tutorial regarding
the "synthesis of typing systems", as found in "Ada, an advanced
language incorporating extensive support for various forms of
abstraction", with "the powerful and flexible capabilities of OOP
[object-oriented-programming]", which is entitled "Type Theories
and Object-Oriented Programming" and can be found in ACM Computing
Surveys, Vol. 20, No. 1, March 1988.

Actual progress toward this objective is described by Wilf R. LaLonde
in "Designing Families of Data Types Using Exemplars" (ACM Transactions
on Programming Languages and Systems, Vol. 11, No. 2, April 1989), who
argues that "designing families of data types is a programming-in-the-
large problem", that "class-based systems need the notion of private
types if they are to surmount their current limitations", and that "an
advanced system should support the programming maxim that designers can
switch implementations as they choose without impacting users. We show,
using the implementation technique that we call programming by exemplars,
that conventional class-based systems cannot help but violate the maxim.
Exemplar-based systems support it easily.".

Finally, I agree with each of your claims (1), (2), and (3), and in
my opinion they are a natural consequence of the information hiding
concept. As far as T3 is concerned, T2 does not HAVE any operations
other than procedure DO_SOMETHING_WITH. Therefore, there can be no
conflict between T3's inheritance of OP1 from T1, T3's provision of
IMP2 for OP1, and T3's status as a T2 having the single operation
DO_SOMETHING_WITH. The fact that T2 was declared internally to be
an object having the operation OP1 as inherited from T1 is hidden from
T3, and therefore is only of consequence within the package body which
provides an implementation of T2 and its associated operation. Within
that operation, we know that the parameter is of type T2, and therefore
this "view" of T3 as a T2 will be supported. The T2 capabilities are
visible to procedures such as DO_SOMETHING_WITH which have the needed
authorization to view T2s, while also being hidden from T3 and from
anything else which is external to the T2 implementation.


Bill Wolfe, wtw...@hubcap.clemson.edu

P.S. I will be in Pittsburgh during the week of October 23-27
for the Tri-Ada '89 conference, and therefore will not be
able to respond to any followups until at least October 30th.

Vinod Grover

Oct 22, 1989, 3:00:04 AM
In article <68...@hubcap.clemson.edu> billwolf%hazel.cs.c...@hubcap.clemson.edu writes:
> It has long been recognized that the time to worry about product
> efficiency is AFTER the product has been developed and put through
> a profiler to determine where the bottlenecks are in the system,
> since in this way the high cost of maximizing efficiency can be
> directed to the points at which it will do the most good.

For those of us without profilers, I suppose, there is no hope. Or perhaps
we should worry about efficiency before the product has been developed, or
perhaps not to worry about efficiency at all.

stt@inmet

Oct 22, 1989, 2:03:00 PM

"It has long been recognized..." by whom?
If there is any lesson I have learned over the
years, it is that if you don't worry about efficiency from
the beginning, you will never achieve aggressive performance
goals.

-Tucker Taft
Intermetrics, Inc.
Cambridge, MA 02138

Barry Margolin

Oct 22, 1989, 4:22:05 PM
In article <126...@sun.Eng.Sun.COM> gro...@sun.UUCP (Vinod Grover) writes:
>For those of us without profilers, I suppose, there is no hope. Or perhaps
>we should worry about efficiency before the product has been developed, or
>perhaps not to worry about efficiency at all.

Making a program efficient without a profiler is like making a program work
without a debugger. Sure, you can manually sprinkle your code with print
statements and timing statements, but it's not an efficient way to do it.

Barry Margolin, Thinking Machines Corp.

bar...@think.com
{uunet,harvard}!think!barmar

Ted Dunning

Oct 22, 1989, 8:39:43 PM

In article <31...@news.Think.COM> barmar@kulla (Barry Margolin) writes:

> In article <126...@sun.Eng.Sun.COM> gro...@sun.UUCP (Vinod Grover) writes:
>                                                      ^^^^^^^^^^^^^^^
>> For those of us without profilers, I suppose, there is no hope. Or perhaps
>> we should worry about efficiency before the product has been developed, or
>> perhaps not to worry about efficiency at all.
>
> Making a program efficient without a profiler is like making a program work
> without a debugger. Sure, you can manually sprinkle your code with print
> statements and timing statements, but it's not an efficient way to do it.

no kidding. but how is it that this guy from sun doesn't think that
he has any profilers?
--
t...@nmsu.edu
Dem Dichter war so wohl daheime
In Schildas teurem Eichenhain!
Dort wob ich meine zarten Reime
Aus Veilchenduft und Mondenschein

Dick Karpinski

Oct 26, 1989, 8:04:07 PM
In article <126...@sun.Eng.Sun.COM> gro...@sun.UUCP (Vinod Grover) writes:

It seems obvious to me that those of us who have that problem should
fix it by making a profiler. Failing sufficient interest to do that
job well, I have a simple technique for making inserted timing probes
do the necessary work. My approach installs numbered probes into the
application to collect timing information from the system and build
a RAM table which is dumped in human readable form at end of run. In
extreme cases (such as no clock) human keystrokes can be used to fake
the clock. Remember, we only need crude information to detect which
components are using most of the time. I'll happily tutor folks who
need to do this by phone or email.

Dick

Dick Karpinski Manager of Minicomputer Services, UCSF Computer Center
Domain: di...@cca.ucsf.edu (415) 476-4529 (11-7)
BITNET: dick@ucsfcca or dick@ucsfvm (415) 658-6803 (Home)
USPS: U-76 UCSF, San Francisco, CA 94143-0704 (415) 658-3797 (ans)

Paul Pedersen

Oct 27, 1989, 11:44:28 AM
In article <31...@news.Think.COM> barmar@kulla (Barry Margolin) writes:
>
>Making a program efficient without a profiler is like making a program work
>without a debugger. Sure, you can manually sprinkle your code with print
>statements and timing statements, but it's not an efficient way to do it.
>

I spent some time last year trying to profile a large UNIX-based system
running on a 386. It turned out to be a waste of time, since profiling is
driven by UNIX clock ticks, which, at least on our system, occur 100 times
per second.

I found that the software under test could execute up to 30 function
calls between ticks, so most functions never got hit by the interrupt, and
all the time-spent-in-function figures were meaningless.

Speeding up the interrupt frequency is apparently not a good idea, since
most time will then be spent in context switches.

The conclusion I came to was that it is not possible to do profiling
using UNIX on a fast processor, short of using a logic analyzer and a
lot of interpreting software (far from sure that this is even feasible).

I would be *very* interested in hearing from anybody who has solved this
problem (I gave up) :-)

Paul Pedersen (pedersen@philmtl)

Robert Firth

Oct 27, 1989, 1:50:37 PM
In article <8...@philmtl.philips.ca> pede...@philmtl.philips.ca (Paul Pedersen) writes:

>The conclusion I came to was that it is not possible to do profiling
>using UNIX on a fast processor, short of using a logic analyzer and a
>lot of interpreting software (far from sure that this is even feasible).
>
>I would be *very* interested in hearing from anybody who has solved this
>problem (I gave up) :-)

Gee, I implemented profiling on this Unix machine in an afternoon.
Admittedly, I was helped by the existence of a system call 'profil',
which does all the real work.

The purpose of profiling is to find out where the program is spending
a lot of its time. Now, the profiling command samples the program
counter 100 times a second, so in a one-minute run that is 6000
samples. Any subprogram body that consumes more than 1% of the mill
will therefore generate on average more than 60 hits, which is surely
a meaningful number.

If no program run ever takes more than a minute, few Unix users will
care about having it tuned. And if a given function uses less than
1% of the mill, you don't care about tuning it.
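The profil-style sampling described here can be sketched in Python (a hypothetical stand-in using a SIGPROF interval timer rather than the profil system call; Unix only): sample the currently executing function at a fixed rate and tally hits, so anything consuming a noticeable share of the CPU accumulates a statistically meaningful count.

```python
import signal
from collections import Counter

hits = Counter()

def sample(signum, frame):
    # One "program counter" sample: tally the interrupted function.
    hits[frame.f_code.co_name] += 1

signal.signal(signal.SIGPROF, sample)
# ITIMER_PROF counts CPU time, ~100 samples per CPU-second, like profil.
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)

def busy():
    t = 0
    for i in range(5_000_000):   # enough CPU time to collect samples
        t += i * i
    return t

busy()
signal.setitimer(signal.ITIMER_PROF, 0)   # stop sampling
```

A function consuming 1% of the CPU in a one-minute run would collect roughly 60 hits, matching the arithmetic above.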

Barry Margolin

Oct 27, 1989, 6:32:33 PM
[Note that I've directed followups to comp.lang.misc, as this no longer
seems to be about Ada.]

In article <46...@bd.sei.cmu.edu> fi...@sei.cmu.edu (Robert Firth) writes:
>In article <8...@philmtl.philips.ca> pede...@philmtl.philips.ca (Paul Pedersen) writes:
>>The conclusion I came to was that it is not possible to do profiling
>>using UNIX on a fast processor, short of using a logic analyzer and a
>>lot of interpreting software (far from sure that this is even feasable).
>>I would be *very* interested in hearing from anybody who has solved this
>>problem (I gave up) :-)
>Gee, I implemented profiling on this Unix machine in an afternoon.
>Admittedly, I was helped by the existence of a system call 'profil',
>which does all the real work.
>
>The purpose of profiling is to find out where the program is spending
>a lot of its time. Now, the profiling command samples the program
>counter 100 times a second, so in a one-minute run that is 6000
>samples. Any subprogram body that consumes more than 1% of the mill
>will therefore generate on average more than 60 hits, which is surely
>a meaningful number.

This is called "statistical program counter metering" on the system I use
(Symbolics Lisp Machines); until last year it was the only metering
facility provided by Symbolics. They implemented it using microcode
microtasks (Symbolics 36xx machines can run up to 5 concurrent microtasks)
rather than interrupts, so the overhead is virtually nil and it samples
very frequently.

To be most useful, statistical PC metering should permit you to specify the
frequency, particularly if your application contains loops (how many
significant applications don't?). This is because the period of the loop
might be close to the period of the PC sampling, and you would get
distorted data.

A way to get more precise metering is to make use of a trap-on-call/return
facility. This can be implemented in hardware (as on the Lispm) or in
software (the subroutine calling sequence could check a flag), or in the
compiler (an option to the compiler tells it to generate metering code).
Whenever a function call is made the trap handler records the time, and
when it returns the elapsed time can be recorded. To get more precise
timings, of course, you need a more precise clock; what you need is an
independent high-frequency clock chip that can be read, rather than a clock
that simply interrupts the processor every Nth of a second. Symbolics
machines have a 60th-of-a-second interrupting clock to drive the scheduler
and time-of-day clock, and a separate microsecond clock that can be read on
demand (there's also a third clock, called the calendar clock, running in
the front end processor, for use when Lisp is shut down).

Another way of metering is to just count the number of times particular
functions are called. This can be done using the same mechanisms as the
above, but it doesn't require the high-frequency clock. Instead of reading
the clock, the trap handler just increments a counter for the function
being called. This is often useful for determining where the likely hot
spots will be.
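The call-counting variant can be sketched in Python (illustrative; `sys.setprofile` acts as a software trap-on-call facility, and the handler just increments a per-function counter):

```python
import sys
from collections import Counter

calls = Counter()

def tracer(frame, event, arg):
    # Software trap-on-call: fires on entry to every Python function.
    if event == "call":
        calls[frame.f_code.co_name] += 1

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setprofile(tracer)
fib(10)
sys.setprofile(None)

# fib(10) enters fib 177 times; the counter exposes the hot spot
# without needing any high-frequency clock.
```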

Paul Baker

Oct 30, 1989, 12:12:59 PM

From article <891016152...@ajpo.sei.cmu.edu>,
by NCO...@IBM.COM ("Norman H. Cohen"):
> Since nobody at either workshop was a strenuous advocate of multiple
> inheritance, I would like to see the argument for multiple inheritance

I attended one of these workshops and didn't advocate multiple
inheritance. The response to Norm's remark on this network seems to be
based on the notion that the power of multiple inheritance was
overlooked at these workshops. If my memory serves, the issue on the
table was the value of this feature in the context of its dangers and
the fact that it is not essential for OOP. Its power was recognized.

The danger is that there is currently no rigorous notion of how we shall
interpret the fusion of methods from alternative lineages. CLOS offers
a fixed and reasonable interpretation of the order of delegation
but the order is not based on any convincing metaphor. In the face of
this problem, it will be difficult to teach design for multiple
inheritance and hard for one developer to maintain another's system.

Multiple inheritance is not essential because its effects can be
achieved by defining a new class whose slots include components from
various classes. The developer must then define all the new methods -
none are inherited automatically. Naturally, that new definition can
be a delegation to one of the ancestor classes. Thus, composition
achieves the same functional effect as inheritance but requires an
explicit explanation of the intended fusion of methods.
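Baker's composition argument can be sketched concretely. A modern Python
illustration (the class names are invented): the new class holds the
ancestor classes as slots rather than inheriting from them, so nothing is
fused implicitly and every method states its delegation:

```python
class Engine:
    def start(self):
        return "engine started"

class Radio:
    def start(self):
        return "radio on"

class Car:
    # Composition: Engine and Radio are components, not superclasses,
    # so no methods are inherited automatically.
    def __init__(self):
        self._engine = Engine()
        self._radio = Radio()

    def start(self):
        # The intended fusion of the two start methods is spelled out
        # explicitly as delegation to the components.
        return self._engine.start() + "; " + self._radio.start()
```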

----------
Paul L. Baker
CTA INCORPORATED
E-MAIL: bak...@ajpo.sei.cmu.edu PHONE: 301-816-1242

William Thomas Wolfe, 2847

Oct 31, 1989, 2:23:15 PM10/31/89
From ba...@IDA.ORG (Paul Baker):

> The danger is that there is currently no rigorous notion of how we shall
> interpret the fusion of methods from alternative lineages. CLOS offers
> a fixed and reasonable interpretation of the order of delegation
> but the order is not based on any convincing metaphor. In the face of
> this problem, it will be difficult to teach design for multiple
> inheritance and hard for one developer to maintain another's system.

I would consider the solution to be rather straightforward:
simply interpret the inheritance system in terms of a visibility
metaphor. Consider the following:

New_Type is_a Type_1, Type_2, Type_3;

Now any operation which was provided in Type_1 and also provided
(with the same "signature") in Type_2 will be unaffected at the
specification level (because both specifications are the same),
but the Type_2 implementation will obscure the Type_1 implementation
(which "became visible" first as a result of being listed first).

Similarly, any operation provided by Type_3 and also provided by
Type_1 or Type_2 will have no effect at the specification level,
but will obscure whichever implementation had previously been visible.

Now given that implementations for the inherited operations are
already visible, New_Type's implementation need not provide any
explicit implementation statements at all if the visibility rules
have established the desired correspondence between specification
and implementation already. Suppose that this is not the case --
for example, the Type_1 implementation of Do_Something is desired
rather than the Type_3 implementation. Then New_Type's body can
simply make a statement along the lines of:

operation Do_Something is Type_1.Do_Something;

Of course, if Do_Something has an involved parameter signature
(parameters with default values, perhaps a returned result), the
entire signature would have to be specified. But the idea is easy
to grasp: we exploit the existing knowledge of the visibility
metaphor in such a way as to maximize the convenience of programming
while also maximizing readability.
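Wolfe's proposed default can be modeled mechanically. A hypothetical sketch
in Python (the dictionaries stand in for the operation tables of Type_1,
Type_2, and Type_3; the names and the helper `resolve` are invented): each
later-listed parent obscures an earlier one, and the subtype body may then
rebind selected operations:

```python
def resolve(parents, overrides=None):
    # Walk the parent list in declaration order; a later implementation
    # of the same operation obscures an earlier one.
    ops = {}
    for p in parents:
        ops.update(p)
    # Explicit rebindings in the new type's body override the default.
    ops.update(overrides or {})
    return ops

type_1 = {"do_something": "Type_1 impl", "p": "Type_1 impl"}
type_2 = {"do_something": "Type_2 impl"}
type_3 = {"do_something": "Type_3 impl"}

# Default would pick Type_3's version; the override restores Type_1's.
new_type = resolve([type_1, type_2, type_3],
                   overrides={"do_something": type_1["do_something"]})
```

Note that Python's real method resolution order gives precedence to
*earlier* bases, the opposite of the convention sketched here, which
illustrates Baker's point that no single ordering is an obvious metaphor.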



> Multiple inheritance is not essential because its effects can be
> achieved by defining a new class whose slots include components from
> various classes. The developer must then define all the new methods -
> none are inherited automatically. Naturally, that new definition can
> be a delegation to one of the ancestor classes. Thus, composition
> achieves the same functional effect as inheritance but requires an
> explicit explanation of the intended fusion of methods.

Fortunately, by exploiting the visibility metaphor, we can eliminate
the need to require such lengthy explanations of what is to be done.

Also, I see this system as actually being less natural, and hence
more difficult to teach, than the alternative described above. It
requires a shift away from the intuitive frame of mind which gives
us our desired linkage between software systems and the real world.

With the visibility mechanism, intuition is in full force; one can
actually "see" what is happening as the relationship between the new
type and the parent types is visualized. Only the linkages which do
not correspond directly to the general ordering of the relationship
will have to be specified; this permits the very rapid perception of
the "big picture", followed by smaller local modifications. Thus, we
start out with an excellent first approximation of the overall structure
of the relationship, and converge very quickly onto the final result.

This would seem to contrast with an essentially linear scan of the
various operations, once the initial "chunk" acquired by the use of
"single" inheritance is absorbed. By explicitly enumerating all of
the new operations, we have an excellent probability of violating the
"seven, plus or minus two" rule regarding the limitations of human memory.


Bill Wolfe, wtw...@hubcap.clemson.edu

Richard O'Keefe

Oct 31, 1989, 9:54:09 PM10/31/89
In article <69...@hubcap.clemson.edu>, billwolf%hazel.cs.c...@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) writes:
: From ba...@IDA.ORG (Paul Baker):

: > The danger is that there is currently no rigorous notion of how we shall
: > interpret the fusion of methods from alternative lineages.
: I would consider the solution to be rather straightforward:

: simply interpret the inheritance system in terms of a visibility
: metaphor. Consider the following:
:
: New_Type is_a Type_1, Type_2, Type_3;
:
: Now any operation which was provided in Type_1 and also provided
: (with the same "signature") in Type_2 will be unaffected at the
: specification level (because both specifications are the same),
: but the Type_2 implementation will obscure the Type_1 implementation
: (which "became visible" first as a result of being listed first).

This doesn't *solve* the problem, it *states* it. Suppose Type_1 provides
operations P and Q, and the implementation of P uses Q. Suppose Type_2
provides operations Q and R, and the implementation of R uses Q. Now
suppose I call the P operation in my New_Type instance. In some
multiple inheritance languages, it would get Q from Type_1; in some of
them it would get Q from New_Type, which amounts to getting it from Type_2.

If you pick the "operations inherited from Type_1 'see' only operations
inherited from Type_1" model, then you might as well use traits rather
than multiple inheritance. If you pick the "operations inherited from
a supertype see the definitions visible in the subtype" model -- which
is precisely the model you get for "virtual functions" in Simula, C++,
and for methods in Smalltalk -- then you cannot verify the implementation
of Type_1 without having access to every Type_2 that might be inherited
together with it.
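O'Keefe's scenario can be demonstrated concretely in Python, which uses the
second of his two models (supertype operations see the definitions visible
in the subtype, like virtual functions in C++ and methods in Smalltalk).
The class names mirror his example; this is an illustration, not Ada:

```python
class Type1:
    def q(self):
        return "Type1.q"
    def p(self):
        # Dynamic dispatch on self: which q this calls depends on the
        # subtype, so Type1 cannot be verified in isolation.
        return "p via " + self.q()

class Type2:
    def q(self):
        return "Type2.q"
    def r(self):
        return "r via " + self.q()

# Type2 is listed first so that its q takes precedence under Python's
# method resolution order (Python prefers earlier bases).
class NewType(Type2, Type1):
    pass

# NewType().p() returns "p via Type2.q", not "p via Type1.q"
```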

William Thomas Wolfe, 2847

Nov 1, 1989, 2:10:52 PM11/1/89
From o...@cs.mu.oz.au (Richard O'Keefe):

> : New_Type is_a Type_1, Type_2, Type_3;
> : Now any operation which was provided in Type_1 and also provided
> : (with the same "signature") in Type_2 will be unaffected at the
> : specification level (because both specifications are the same),
> : but the Type_2 implementation will obscure the Type_1 implementation
> : (which "became visible" first as a result of being listed first).
>
> This doesn't *solve* the problem, it *states* it.

Go back and read the article again. This only establishes the
*default*; there was (immediately after the passage you cited
above) a section mentioning a statement to the effect:

operation Do_Something is Type_1.Do_Something;

which overrides the default binding (to Type_3 in the example)
and binds Do_Something to the implementation of Type_1 instead.


Bill Wolfe, wtw...@hubcap.clemson.edu

William Thomas Wolfe, 2847

Nov 1, 1989, 2:38:43 PM11/1/89
From o...@cs.mu.oz.au (Richard O'Keefe):
> This doesn't *solve* the problem, it *states* it. Suppose Type_1 provides
> operations P and Q, and the implementation of P uses Q. Suppose Type_2
> provides operations Q and R, and the implementation of R uses Q. Now
> suppose I call the P operation in my New_Type instance. In some
> multiple inheritance languages, it would get Q from Type_1; in some of
> them it would get Q from New_Type, which amounts to getting it from Type_2.
>
% If you pick the "operations inherited from Type_1 'see' only operations
% inherited from Type_1" model, then you might as well use traits rather
% than multiple inheritance. If you pick the "operations inherited from
% a supertype see the definitions visible in the subtype" model -- which
% is precisely the model you get for "virtual functions" in Simula, C++,
% and for methods in Smalltalk -- then you cannot verify the implementation
% of Type_1 without having access to every Type_2 that might be inherited
% together with it.

If Type_1 is a limited private type, then the implementation of
P should use Type_1.Q; if not, then assuming that Type_2.Q is
used as the implementation of New_Type.Q, Type_2.Q should be
called by New_Type.P's implementation.

Thus, the implementation of Type_1 can be completely verified
by making it into a limited private "sealed" system. I think
there is also a good case for making the compiler produce a
warning whenever this sort of inter-operation dependency is
detected with respect to operations inherited from different
(non-limited private) sources.
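The "sealed" idea has a rough analogue in Python's name mangling, which can
serve as a sketch of Wolfe's point (class names as in O'Keefe's example;
not Ada's limited private mechanism): by binding P to a private Q, Type1's
implementation stays verifiable no matter what is co-inherited with it:

```python
class Type1:
    def __q(self):                     # mangled to _Type1__q, so no
        return "Type1.q"               # subclass can obscure it by accident
    def p(self):
        # p is "sealed" against inheritance-based modification: this call
        # always reaches Type1's own q.
        return "p via " + self.__q()

class Type2:
    def q(self):
        return "Type2.q"

class NewType(Type2, Type1):
    pass

# NewType().p() still returns "p via Type1.q"
```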


Bill Wolfe, wtw...@hubcap.clemson.edu

Richard O'Keefe

Nov 1, 1989, 11:40:33 PM11/1/89
In article <69...@hubcap.clemson.edu>, billwolf%hazel.cs.c...@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) writes:
> From o...@cs.mu.oz.au (Richard O'Keefe):
> > This doesn't *solve* the problem, it *states* it.
>
> Go back and read the article again. This only establishes the
> *default*; there was (immediately after the passage you cited
> above) a section mentioning a statement to the effect:
>
> operation Do_Something is Type_1.Do_Something;

That does not address the problem. The problem is not the meaning of
Do_Something in Type_3 (the subtype). The problem is whether operations
inherited from a supertype can see only operations defined in that
supertype or whether they see the definitions visible in the subtype
(like 'virtual' functions and methods usually do). You have to say
what you want in Type_1 (the supertype), not Type_3 (the subtype).

William Thomas Wolfe, 2847

Nov 4, 1989, 1:06:23 PM11/4/89
From me, earlier:
> [how to reconcile inheritance with verification?]
> [...] the implementation of Type_1 can be completely verified
> by making it into a limited private "sealed" system.

Actually, it seems that there would be a need for another dimension
to "limited privacy": the current dimension (for protecting ADTs
against user errors), and a new dimension (for protecting ADTs from
inheritance-based modifications); new syntax would have to be found
to express the designer's intent with respect to both dimensions.

Incidentally, my first response to Richard's article was oriented
toward the problem originally under consideration (how to specify
the resolution of inheritance conflicts) rather than the different
problem Richard had in mind (how to resolve the conflict between
inheritance and verification); this was due solely to an error on
my part. So far I have _still_ been unable to get the article killed;
perhaps one day the vn news-reading program will provide its users
with direct article-kill capabilities. I deeply regret the error,
and I hope that it will not negatively impact the very important
discussion about providing object-oriented capabilities in Ada 9X.


Bill Wolfe, wtw...@hubcap.clemson.edu

Dave Davis

Nov 7, 1989, 9:55:58 AM11/7/89
A possible solution for determining timing for a program on a system
with minimal tools is to code your own timing procedure and insert calls
to it at various places in the code. If the timing routine is very
short, it may have little effect on the measured times; alternatively,
simply ignore the overhead and base conclusions on relative timings. In
the worst case, one will have to subtract a fixed overhead for each time
measurement. It is also possible for the dynamics of the program to
change due to instrumentation overhead.
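Davis's hand-rolled approach, including the overhead subtraction, can be
sketched as follows (a modern Python illustration; the helper `measure`
and the workload are invented):

```python
import time

def measure(fn, reps=1000):
    # Time a region by bracketing it with clock reads, averaging over
    # repetitions to smooth out clock granularity.
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

# Estimate the fixed cost of the measurement machinery itself by timing
# an empty body, then subtract it from the raw reading.
overhead = measure(lambda: None)
raw = measure(lambda: sum(range(100)))
corrected = max(raw - overhead, 0.0)   # base conclusions on the difference
```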