Principle of OOD #8, The Stable Dependent Principle


Robert Martin

Aug 21, 1995
This principle is a rule which constrains the topology of a category
hierarchy. It demonstrates that the structure of that topology is
significant to the maintainability of the application. It also
supplies a metric by which that topology can be measured.

--------------------------------------------------------------------
8. Dependencies between released categories must run in the
direction of stability. The dependee must be more stable than
the depender.
--------------------------------------------------------------------

One could view this as an axiom, rather than a principle, since it
is impossible for a category to be more stable than the categories
that it depends upon. When a category changes, it always affects the
dependent categories (even if for nothing more than a
retest/revalidation). However, the principle is meant as a guide to
designers. Never cause a category to depend upon less stable
categories.

What is stability? The probable change rate. A category that is
likely to undergo frequent changes is instable. A category that will
change infrequently, if at all, is stable.

There is an indirect method for measuring stability. It employs the
axiomatic nature of this principle. Stability can be measured as a
ratio of the couplings to classes outside the category.

A category which many other categories depend upon is inherently
stable. The reason is that such a category is difficult to change.
Changing it causes all the dependent categories to change.

On the other hand, a category which depends on many other categories
is instable, since it must be changed whenever any of the categories
it depends upon change.

A category which has many dependents, but no dependees, is ultimately
stable since it has many reasons not to change and no reason to
change. (This ignores the category's intrinsic need to change based
upon bugs and feature drift.)

A category that depends upon many categories but has no dependents is
ultimately instable since it has no reason not to change, and is
subject to all the changes coming from the categories it depends upon.

So this notion of stability is positional rather than absolute. It
measures stability in terms of a category's position in the dependency
hierarchy. It says nothing about the subjective reasons that a
category might need changing, and focuses only upon the objective,
physical reasons that facilitate or constrain changes.

To calculate the Instability of a category (I) count the number of
classes, outside of the category, that depend upon classes within the
category. Call this number Ca. Now count the number of classes
outside the category that classes within the category depend upon.
Call this number Ce. I = Ce / (Ca + Ce). This metric ranges from 0
to 1, where 0 is ultimately stable, and 1 is ultimately instable.
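As a sketch of that calculation (the function, the class names, and the
dependency map are all invented for illustration, not from the original
post), the I metric can be computed directly from a class-level
dependency table:

```python
# Sketch of the I = Ce / (Ca + Ce) metric described above.
# 'deps' maps each class to the classes it depends upon; 'category'
# is the set of classes inside the category being measured.

def instability(category, deps):
    # Ce: couplings from classes inside the category to outside classes
    ce = sum(1 for c in category
               for d in deps.get(c, ())
               if d not in category)
    # Ca: couplings from outside classes onto classes in the category
    ca = sum(1 for c, ds in deps.items()
               if c not in category
               for d in ds
               if d in category)
    return ce / (ca + ce) if (ca + ce) else 0.0

# A category that is depended upon but depends on nothing is
# ultimately stable (I = 0); the reverse is ultimately instable (I = 1).
deps = {"App": ["Shape"], "Gui": ["Shape"], "Shape": []}
print(instability({"Shape"}, deps))   # -> 0.0
print(instability({"App"}, deps))     # -> 1.0
```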

In a dependency hierarchy, wherever a category with a low I value
depends upon a category with a high I value, the dependent category
will be subject to the higher rate of change of the category that it
depends upon. That is, the category with the high I metric acts as a
collector for all the changes below it, and funnels those changes up
to the category with the low I metric.

Said another way, a low I metric indicates that there are relatively
many dependents. We don't want these dependents to be subject to high
rates of change. Thus, if at all possible, categories should be
arranged such that the categories with high I metrics depend upon the
categories with low I metrics.
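That arrangement rule lends itself to a mechanical check. A minimal
sketch (category names and I values are made up): flag any dependency
where a low-I (stable) category depends upon a higher-I (instable) one:

```python
# Report dependencies that run against the direction of stability,
# i.e. a category with low I depending upon a category with higher I.
# Purely illustrative; not from the original post.

def stability_violations(cat_deps, i_metric):
    """cat_deps: category -> categories it depends upon.
    i_metric: category -> its I value (0 = stable, 1 = instable)."""
    return [(a, b) for a, ds in cat_deps.items()
                   for b in ds
                   if i_metric[a] < i_metric[b]]

i_metric = {"Framework": 0.1, "App": 0.9}
print(stability_violations({"Framework": ["App"]}, i_metric))  # -> [('Framework', 'App')]
print(stability_violations({"App": ["Framework"]}, i_metric))  # -> []
```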


--
Robert Martin | Design Consulting | Training courses offered:
Object Mentor Assoc.| rma...@oma.com | OOA/D, C++, Advanced OO
2080 Cranbrook Rd. | Tel: (708) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (708) 918-1023 | Development Contracts.

Jerry Fitzpatrick

Aug 21, 1995
In <1995Aug21....@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes (with edits):

>--------------------------------------------------------------------
> 8. Dependencies between released categories must run in the
> direction of stability. The dependee must be more stable than
> the depender.
>--------------------------------------------------------------------
>

>What is stability? The probable change rate. A category that is
>likely to undergo frequent changes is instable. A category that will
>change infrequently, if at all, is stable.
>
>There is an indirect method for measuring stability. It employs the
>axiomatic nature of this principle. Stability can be measured as a
>ratio of the couplings to classes outside the category.
>
>A category which many other categories depend upon is inherently
>stable. The reason is that such a category is difficult to change.
>Changing it causes all the dependent categories to change.
>
>On the other hand, a category which depends on many other categories
>is instable, since it must be changed whenever any of the categories
>it depends upon change.

The notion that stability is inversely proportional to degree of
coupling has a lot of intuitive appeal. However, I'm not convinced that
it's correct.

This idea is similar to reliability analysis. Let's say you have a
component that is composed of four sub-components. Each sub-component
has a reliability factor that ranges from 0 - 1, with 1 being complete
reliability. The overall reliability of the component is then:

R = R1 * R2 * R3 * R4

where R1 = reliability factor of component 1, etc.
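A quick numeric illustration of the product form (the values here are
invented):

```python
from math import prod

def reliability(factors):
    # Overall reliability is the product of the sub-component factors.
    return prod(factors)

# Four slightly imperfect parts degrade the whole quickly...
print(reliability([0.9, 0.9, 0.9, 0.9]))  # -> approximately 0.6561
# ...while ideal parts (factor = 1) compose into an ideal component.
print(reliability([1.0, 1.0, 1.0, 1.0]))  # -> 1.0
```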

Intuitively, it seems that the greater the number of sub-components,
the less reliable the component is apt to be. Unfortunately, this logic
gets us into trouble quickly.

Clearly, if ideal sub-components are used (rel factor = 1), any number
of them may be used to construct an ideal component. The point is that
the reliability depends not only on the number of sub-components but on the
reliability of each. Any metric which includes only the number of
sub-components is very flawed unless there is a great degree of
homogeneity among the components.

Likewise, the stability of a class category or assemblage can be
determined by the equation:

S = S1 * S2 * S3 * S4 ...

but unless our metric includes the stability of each sub-component
(class), it fails to measure anything significant.

The situation is perhaps best described by the old adage "a chain is
only as good as its weakest link". The strength of the chain has only a
marginal relationship to the number of links.

--
Jerry Fitzpatrick Assessment, Training & Mentoring
Red Mountain Corporation for Software Architecture and
1795 N. Fry Rd, Suite 329 Development Processes (inc. OOA/OOD)
Katy Texas USA 77449 Phone/Fax: 713-578-8174

Bob Jacobsen

Aug 22, 1995
In article <41ag46$o...@ixnews3.ix.netcom.com>, red...@ix.netcom.com (Jerry
Fitzpatrick) wrote:

>
> ...


>
> Likewise, the stability of a class category or assemblage can be
> determined by the equation:
>
> S = S1 * S2 * S3 * S4 ...
>
> but unless our metric includes the stability of each sub-component
> (class), it fails to measure anything significant.
>

But it's not just this product that is needed ...

You can in general just keep expanding your calculation for S of some
category - it's the product of the S's of each sub-category. Each of those
can be replaced by the product of the S's of its sub-categories, etc.
Without loops, this terminates and can be calculated from the S's of the
"leaf" categories.
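That recursive expansion can be sketched as follows (names are
illustrative, and the multiplicative model itself is the one under
discussion here, not an established metric). Note that a loop in the
graph would make the recursion fail to terminate, which is one way to
see why cycles spoil the calculation:

```python
def expanded_stability(cat, deps, leaf_s):
    """Multiply out S over an acyclic dependency graph, bottoming
    out at leaf categories whose S values are given in leaf_s."""
    ds = deps.get(cat, [])
    if not ds:                       # a "leaf" category
        return leaf_s.get(cat, 1.0)
    s = 1.0
    for d in ds:
        s *= expanded_stability(d, deps, leaf_s)
    return s

# A -> B -> C: the result always reduces to a product over the leaves.
deps = {"A": ["B"], "B": ["C"], "C": []}
print(expanded_stability("A", deps, {"C": 1.0}))  # -> 1.0
print(expanded_stability("A", deps, {"C": 0.5}))  # -> 0.5
```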

This implies that the leaf nodes' stability and the overall topology are
all that matter. But intuitively it seems possible for some 'shielding'
to be going on:

A o- B o- C

where A formally depends on B, which formally depends on C. Formally A
depends on C, but it could be that either
1) Just about any change to C propagates through B to A
or
2) No change to C makes any difference to A
(or in between - examples left to the reader).

Taking this into account is likely to take some type of engineering
judgement of shielding, which is not necessarily bad; it just gets us away
from the idea of a metric.

All this points out another reason to avoid cycles in the dependency chain
- they make this kind of calculation meaningless. With loops, you end up
with a large set of coupled equations. They may or may not have a unique
solution, depending on topology, but their results are pretty certain to
be non-intuitive.
--
Bob Jacobsen, (Bob_Ja...@lbl.gov, 510-486-7355, fax 510-486-5101)

Jerry Fitzpatrick

Aug 22, 1995
In <Bob_Jacobsen-2...@131.243.214.119> Bob_Ja...@lbl.gov
(Bob Jacobsen) writes:
>
>In article <41ag46$o...@ixnews3.ix.netcom.com>, red...@ix.netcom.com
>(Jerry Fitzpatrick) wrote:

>> Likewise, the stability of a class category or assemblage can be
>> determined by the equation:
>>
>> S = S1 * S2 * S3 * S4 ...
>>
>> but unless our metric includes the stability of each sub-component
>> (class), it fails to measure anything significant.
>>
>
>But its not just this product that is needed ...
>
>You can in general just keep expanding your calculation for S of some
>category - its the product of the S's of each sub-category. Each of
>those can be replaced by the product of the S's of it's
>sub-categories, etc. Without loops, this terminates and can be
>calculated from the S's of the "leaf" categories.

I think you're saying that we have a tree structure rather than the
linear structure implied by my equation. If so, I agree completely, and
didn't intend to imply otherwise.

>This implies that the leaf nodes stability and the overall topology is
>all that matters. But intuitively it seems possible for some
>'shielding' to be going on:
>
> A o- B o- C
>
>where A formally depends on B, which formally depends on C. Formally A
>depends on C, but it could be that either
>1) Just about any change to C propagates through B to A
>or
>2) No change to C makes any difference to A
>(or in between - examples left to the reader).

If C makes no difference to A, then it wouldn't be part of the tree,
right? I think the "shielding" you're suggesting is more like
"pruning". A component which has no effect on another has zero
coupling, and is not related to the other with respect to stability.

Another type of "shielding" effect, however, is implicit in terms of
magnitude. That is, you can prune leaves off the tree which have a
stability (or reliability) factor sufficiently close to one. For
example, the stability of one atom or molecule is unimportant to the
macroscopic behavior of an object.

>Taking this into account is likely to take some type of engineering
>judgement of sheilding, which is not necessarily bad; it just gets us
>away from the idea of a metric.
>
>All this points out another reason to avoid cycles in the dependency
>chain - they make this kind of calculation meaningless. With loops,
>you end up with a large set of coupled equations. They may or may not
>have a unique solution, depending on topology, but their results are
>pretty certain to be non-intuitive.

I'm not sure I completely understand your point here.

Patrick D. Logan

Aug 22, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) wrote:

>The situation is perhaps best described by the old adage "a chain is
>only as good as its weakest link". The strength of the chain has only a
>marginal relationship to the number of links.

I don't necessarily agree with this analogy for software:

If a component only has one "link", no matter how bad that link is,
it may still be easier to fix than another component that has five
less severe links.

The number of dependents gives *some* indication of the magnitude of the
effort required to make a change.

Other measures can give an indication of the "severity" of any of the
specific dependencies, true.

--
mailto:Patrick...@ccm.jf.intel.com
Intel/Personal Conferencing Division
(503) 264-9309, FAX: (503) 264-3375

"Poor design is a major culprit in the software crisis...
..Beyond the tenets of structured programming, few accepted...
standards stipulate what software systems should be like [in] detail..."
-Bruce W. Weide, IEEE Computer, August 1995

Bob Jacobsen

Aug 22, 1995
In article <41d1lm$n...@ixnews7.ix.netcom.com>, red...@ix.netcom.com (Jerry
Fitzpatrick) wrote:

> >In article <41ag46$o...@ixnews3.ix.netcom.com>, red...@ix.netcom.com
> >(Jerry Fitzpatrick) wrote:

...

> >> Likewise, the stability of a class category or assemblage can be
> >> determined by the equation:

...


> >> S = S1 * S2 * S3 * S4 ...

...
> >You can in general just keep expanding your calculation for S of some
> >category - its the product of the S's of each sub-category. Each of
> >those can be replaced by the product of the S's of it's
> >sub-categories, etc. Without loops, this terminates and can be
> >calculated from the S's of the "leaf" categories.
>
> I think you're saying that we have a tree structure rather than the
> linear structure implied by my equation.

More than that. If you have classes A, B and C below, then S(A) = S(B)
from your formula, S(B) = S(C), and therefore S(A) = S(C). With a tree,
you get extra factors, but the net result is that you always end up with a
product of the leaves' stabilities. If these are 1 (as I think Robert
implied), the result is always 1.

> >This implies that the leaf nodes stability and the overall topology is
> >all that matters. But intuitively it seems possible for some
> >'shielding' to be going on:

...


> > A o- B o- C

...


> >where A formally depends on B, which formally depends on C. Formally A
> >depends on C, but it could be that either
> >1) Just about any change to C propagates through B to A
> >or
> >2) No change to C makes any difference to A
> >(or in between - examples left to the reader).

...


> If C makes no difference to A, then it wouldn't be part of the tree,
> right? I think the "shielding" you're suggesting is more like
> "pruning". A component which has no effect on another has zero
> coupling, and is not related to the other with respect to stability.

Depends on the category design. B could use a class from C by returning a
result from it to A, so that A directly depends on the behavior of a
member in C. This is a pretty close coupling. Or C could just be a
container class used internally by B and never in the slightest visible
to A. In this case, A still depends on B (for some other reason), but B's
dependence on C is irrelevant to A and shouldn't be held against it. Your
proposed metric does count C against A's stability, and I think that's not
so correct in some cases.

> >All this points out another reason to avoid cycles in the dependency
> >chain- they make this kind of calculation meaningless. With loops,
> >you end up with a large set of coupled equations. They may or may not
> >have a unique solution, depending on topology, but their results are
> >pretty certain to be non-intuitive.
> I'm not sure I completely understand your point here.

If A depends on B, which depends on C, which depends on A, S(A) = S(B)
from your formula, S(B) = S(C), S(C) = S(A) and therefore the only
solution of the equations is 'degenerate', because the only info you have
is S(A) = S(A). S(A) could be anything from 0 to 1, because there is no
constraint on it.


I think "stability" is something too complicated for these simple metrics
- maybe we should rename something. And I'm not sure I agree that your
proposed transitive metric is an improvement.

Bob Jacobsen

Aug 22, 1995
In article <1995Aug23.0...@rcmcon.com>, rma...@rcmcon.com (Robert
Martin) wrote:

> Bob_Ja...@lbl.gov (Bob Jacobsen) writes:
>
> >This implies that the leaf nodes stability and the overall topology is all
> >that matters. But intuitively it seems possible for some 'shielding' to
> >be going on:
>

> > A o- B o- C
>

> >where A formally depends on B, which formally depends on C. Formally A
> >depends on C, but it could be that either
> >1) Just about any change to C propagates through B to A
> >or
> >2) No change to C makes any difference to A
> >(or in between - examples left to the reader).

The above comment was in response to Jerry Fitzpatrick's notion that the
stability of a category is a function of the stability of all the classes
it depends on. Mathematically (if not in practice), this makes it a
transitive relation as Robert points out elsewhere. But _if_ you want
this transitivity, _then_ you need to cope with whether instability really
"passes through" a category. I think it doesn't always, from which I
conclude that Jerry's proposed replacement/extension to the I metric is
not a clear win. Robert later seems to rephrase the debate:

> In case 2) above, if A is a released component with a version number,
> that component will probably need revalidation when C changes, even if
> nothing in A changes. Thus, even if the "shielding" you are talking
> about exists, it may not protect A from being affected by changes to
> C.

Category A will have to be revalidated when C changes, but A may or may
not have to have any changes made. The definition in the original
principle #8 was

: What is stability? The probable change rate. A category
: that is likely to undergo frequent changes is instable. A
: category that will change infrequently, if at all, is stable.

If "stability" is linked to "change", it carries a more intuitive meaning
IMHO than if it's linked to "number of other things that might cause
revalidation".

The I metric really counts "number of other things that might cause
revalidation" and "number of revalidations that a change will cause".
Perhaps another name would be more intuitive than "stability"? As a
physicist, "force" and "inertia" (mass) spring to mind, but that implies
that I should be called "acceleration", which doesn't carry the right
meaning. "Connectivity ratio"? "Buy/sell ratio"? Gross margin?

"exposure"?

I really don't have a flawless proposal for a better phrase than
"stability", but that's the beauty of c.object - maybe somebody who's
better with words can find the perfect name.

Bob

Robert Martin

Aug 23, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) writes:

>In <1995Aug21....@rcmcon.com> rma...@rcmcon.com (Robert
>Martin) writes (with edits):

>>--------------------------------------------------------------------
>> 8. Dependencies between released categories must run in the
>> direction of stability. The dependee must be more stable than
>> the depender.
>>--------------------------------------------------------------------

>>What is stability? The probable change rate. A category that is
>>likely to undergo frequent changes is instable. A category that will
>>change infrequently, if at all, is stable.
>>

>The notion that stability is inversely proportional to degree of
>coupling has a lot of intuitive appeal. However, I'm not convinced that
>it's correct.

[snip]

>the stability of a class category or assemblage can be
>determined by the equation:

> S = S1 * S2 * S3 * S4 ...

>but unless our metric includes the stability of each sub-component
>(class), it fails to measure anything significant.

Stability is the inverse of "propensity for change". Now a software
module can be instable for many reasons. It may just be the kind of
module that changes a lot because it is intrinsically variable (e.g. it
may be part of a feature set that is constantly changing). Or, it may
change a lot because of external factors in its environment (e.g. the
things that it depends upon are changing).

The internal factors that cause a module to change are difficult to
quantify. But the external factors are not. It is these external
factors that I am attempting to measure in the context of this
principle. It is not possible for me to measure the absolute
stability of a module, but it is possible for me to determine which
*other* modules will be affected if any particular module changes.

Principle #8 is an attempt to make sure that modules which exhibit
external stability do not depend upon modules that exhibit external
instability.

Does this completely address the stability issue? No, certainly not.
But it does address some of it.

Robert Martin

Aug 23, 1995
Bob_Ja...@lbl.gov (Bob Jacobsen) writes:

>This implies that the leaf nodes stability and the overall topology is all
>that matters. But intuitively it seems possible for some 'shielding' to
>be going on:

> A o- B o- C

>where A formally depends on B, which formally depends on C. Formally A
>depends on C, but it could be that either
>1) Just about any change to C propagates through B to A
>or
>2) No change to C makes any difference to A
>(or in between - examples left to the reader).

In case 2) above, if A is a released component with a version number,
that component will probably need revalidation when C changes, even if
nothing in A changes. Thus, even if the "shielding" you are talking
about exists, it may not protect A from being affected by changes to
C.

--

Robert Martin

Aug 23, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) writes:

>Another type of "shielding" effect, however, is implicit in terms of
>magnitude. That is, you can prune leaves off the tree which have a
>stability (or reliability) factor sufficiently close to one. For
>example, the stability of one atom or molecule is unimportant to the
>macroscopic behavior of an object.

When I began studying stability, I tried to incorporate
"transitivity". i.e. the stability of a module was calculated from
the stability of all the modules that it depends upon. However, I
quickly found that this doesn't work well. The leaves of the tree
*always* have a positional instability of 0. i.e. they are completely
stable because they do not depend upon anybody else. If the leaves
are stable then all the modules that depend upon the leaves are
stable, etc. And the stability of the leaves propagates upwards to
the root. Clearly this is not very useful.

So I satisfied myself that positional stability is not transitive. It
is an attribute of the module's *position* in the dependency
hierarchy, and has nothing whatever to do with the intrinsic stability
of any other modules.

The 'I' metric, calculated as Ce/(Ca+Ce), is this measurement of
positional stability. It does not say anything at all about
intrinsic stability (which can only really be measured by
accumulating a change history over time).
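As a toy sketch of what "accumulating a change history over time" might
mean in practice (the function, the dates, and the log are all invented
for illustration), intrinsic instability could be approximated as a
change rate over an observation window:

```python
from datetime import date

def change_rate(change_dates, start, end):
    """Changes per year over an observation window -- a crude
    empirical proxy for intrinsic (in)stability."""
    years = (end - start).days / 365.25
    hits = sum(1 for d in change_dates if start <= d <= end)
    return hits / years

# A hypothetical revision log for one module.
log = [date(1995, 1, 10), date(1995, 3, 2), date(1995, 7, 15)]
print(change_rate(log, date(1995, 1, 1), date(1995, 12, 31)))  # about 3 changes/year
```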

Jerry Fitzpatrick

Aug 23, 1995
In <Bob_Jacobsen-2...@tractor.lbl.gov> Bob_Ja...@lbl.gov
(Bob Jacobsen) writes (with edits):

>The above comment was in response to Jerry Fitzpatrick's notion that
>the stability of a category is a function of the stability of all the
>classes it depends on. Mathematically (if not in practice), this
>makes it a transitive relation as Robert points out elsewhere. But
>_if_ you want this transitivity, _then_ you need to cope with whether
>instability really "passes through" a category. I think it doesn't
>always, from which I conclude that Jerry's proposed
>replacement/extension to the I metric is not a clear win. Robert
>later seems to rephrase the debate:

FWIW, I'm not really trying to address Principle #8 directly. However,
I think the proposed metric has problems. If it's the only basis on
which Principle #8 rests, then I have doubts about the validity of the
principle.

>The I metric really counts "number of other things that might cause
>revalidation" and "number of revalidations that a change will cause".
>Perhaps another name would be more intuitive than "stability"?

Yes, I agree that this is precisely what is measured. This seems like a
very dilute measurement to me, and therefore I question its value.

In design, I don't care very much about enumerating the things that
*might* happen; I care more about what's *likely* to happen. I
understand that it's very difficult (impossible?) to quantify the
likelihood of change for a module, but I don't think you can make it go
away by ignoring it.

>I really dont have a flawless proposal for a better phrase than
>"stability", but that's the beauty of c.object - maybe somebody who's
>better with words can find the perfect name.

Stability metrics have already been proposed. For anyone who's
interested, here is a reference:

"Design Stability Measures for Software Maintenance"
S. Yau and J. Collofello
IEEE Transactions on Software Engineering
Volume SE-11, September 1985, pp. 849-856

The terms 'robustness' or 'complexity' might be preferable to
'stability' (just trying them on for size). Really, though, do we need
another term at all? Isn't this simply another coupling metric?

Jerry Fitzpatrick

Aug 23, 1995
In <1995Aug23.0...@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes:

>>>--------------------------------------------------------------------
>>> 8. Dependencies between released categories must run in the
>>> direction of stability. The dependee must be more stable than
>>> the depender.
>>>--------------------------------------------------------------------

>Stability is the inverse of "propensity for change". Now a software
>module can be instable for many reasons. It may just be the kind of
>module that changes a lot because it is intrinsically variable (e.g. it
>may be part of a feature set that is constantly changing). Or, it may
>change a lot because of external factors in its environment (e.g. the
>things that it depends upon are changing).

I'm not sure I understand your distinction between intrinsic and
extrinsic stability.

>The internal factors that cause a module to change are difficult to
>quantify. But the external factors are not. It is these external
>factors that I am attempting to measure in the context of this
>principle. It is not possible for me to measure the absolute
>stability of a module, but it is possible for me to determine which
>*other* modules will be affected if any particular module changes.

Ditto. Why are the external factors easier to quantify than the
internal factors?

I certainly agree that you can determine which modules are affected by
a change in another. This is a coupling metric.

>Principle #8 is an attempt to make sure that modules which exhibit
>external stability do not depend upon modules that exhibit external
>instability.

I don't get it.

>Does this completely address the stability issue? No, certainly not.
>But it does address some of it.

"Complete" and "absolute" are pretty tough to attain, and I wouldn't
expect that.

I'm concerned that a metric which is essentially an average of
inter-module coupling gives the misleading impression that all modules
have an equal, non-deterministic stability factor. This, of course, is
completely false and could lead someone far afield in their design.

In general, we have to be very careful about applying any metric. We're
all comforted by the apparent empiricism of the metric, but its use as
a heuristic can lead to problems.

For example, it's been found that modules outside the range of 10-100
statements have more bugs. This has led some companies to demand that
modules never be larger than 100 LOC. Unfortunately, this can promote
the breakup of a cohesive module into more-or-less arbitrary
sub-modules. Clearly this can lead to more bugs, not fewer.

I'm not suggesting that a stability or coupling metric is
inappropriate. I just think it has to be mathematically sound,
well-explained (to avoid confusion), and provide a useful, reliable
result.

Jerry Fitzpatrick

Aug 23, 1995
In <1995Aug23.0...@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes:

>When I began studying stability, I tried to incorporate
>"transitivity". i.e. the stability of a module was calculated from
>the stability of all the modules that it depends upon. However, I
>quickly found that this doesn't work well. The leaves of the tree
>*always* have a positional instability of 0. i.e. they are completely
>stable because they do not depend upon anybody else. If the leaves
>are stable then all the modules that depend upon the leaves are
>stable, etc. And the stability of the leaves propagates upwards to
>the root. Clearly this is not very useful.

My point exactly.

>So I satisfied myself that positional stability is not transitive. It
>is an attribute of the module's *position* in the dependency
>hierarchy, and has nothing whatever to do with the intrinsic stability
>of any other modules.

No, this I don't agree with. You're saying that overall stability
depends only on topology. I believe that it depends both on topology
and on intrinsic stability. Using topology alone is like trying to
guess the weight of a block from its dimensions, without any knowledge
of its composition.

>The 'I' metric, calculated as Ce/(Ca+Ce), is this measurement of
>positional stability. It does not say anything at all about
>intrinsic stability (which can only really be measured by
>accumulating a change history over time).

Well, Ce/(Ca+Ce) is certainly a metric. Unfortunately, it treats all
positions equally, leading to a linear relationship. This might provide
an understanding of the coupling, but without incorporating the
intrinsic stability factors, I don't think it tells you anything about
the macroscopic stability.

Jerry Fitzpatrick

Aug 23, 1995
In <41dg8q$v...@ornews.intel.com> "Patrick D. Logan"
<patrick...@ccm.jf.intel.com> writes:
>
>red...@ix.netcom.com (Jerry Fitzpatrick) wrote:
>
>>The situation is perhaps best described by the old adage "a chain is
>>only as good as its weakest link". The strength of the chain has only
>>a marginal relationship to the number of links.
>
>I don't necessarily agree with this analogy for software:
>
>If a component only has one "link", no matter how bad that link is,
>it may still be easier to fix than another component that has five
>less severe links.
>
>The number of dependents gives *some* indication to the magnitude of
>the effort required to make a change.
>
>Other measures can give an indication of the "severity" of any of the
>specific dependencies, true.

I had second thoughts about the analogy after posting the message. It
really doesn't demonstrate the point very accurately.

As you suggest, the number of dependents (chain links) is related to
the overall reliability of the chain. This, however, is because the
links are homogeneous (that is, they presumably have the same size,
material, and manufacturing process).

A program is not composed of homogeneous modules. It is a mistake to
think that a valid metric can be created for heterogeneous compositions
when details of the components are not specified. Dr. Deming's "Parable
of the Red Beads" provides a good example of how logic and intuition
fail us under these circumstances.

Jerry Fitzpatrick

Aug 23, 1995
In <Bob_Jacobsen-2...@tractor.lbl.gov> Bob_Ja...@lbl.gov
(Bob Jacobsen) writes:

>If A depends on B, which depends on C, which depends on A, S(A) = S(B)
>from your formula, S(B) = S(C), S(C) = S(A) and therefore the only
>solution of the equations is 'degenerate', because the only info you
>have is S(A) = S(A). S(A) could be anything from 0 to 1, because
>there is no constraint on it.

Yikes! This is really a recursive relationship isn't it? Recursion
certainly complicates any analysis.

>I think "stability" is something too complicated for these simple
>metrics - maybe we should rename something. And I'm not sure I agree
>that your proposed transitive metric is an improvement.

I'm not really proposing an alternative.

Patrick D. Logan

Aug 23, 1995
Bob_Ja...@lbl.gov (Bob Jacobsen) wrote:

>But _if_ you want
>this transitivity, _then_ you need to cope with whether instability really
>"passes through" a category. I think it doesn't always...

Instability doesn't always pass through. A category may contain
classes B and C. D may use only B. C changes. Does D then have to be
re-tested?

Maybe. Maybe not. But to be sure, you had better be pessimistic.

The definition of "stability" in RCM's principle is fast, loose, and
glossy. But it may be enough to be effective. Metrics should be simple
yet meaningful in order to get the most bang for just a little bucks.

Tim Dugan

Aug 23, 1995
In article <1995Aug23.0...@rcmcon.com>,

Robert Martin <rma...@rcmcon.com> wrote:
>red...@ix.netcom.com (Jerry Fitzpatrick) writes:
>
>>In <1995Aug21....@rcmcon.com> rma...@rcmcon.com (Robert
>>Martin) writes (with edits):
>
>>>--------------------------------------------------------------------
>>> 8. Dependencies between released categories must run in the
>>> direction of stability. The dependee must be more stable than
>>> the depender.
>>>--------------------------------------------------------------------
>>>What is stability? The probable change rate. [...]

>
>[snip]
>
>>the stability of a class category or assemblage can be
>>determined by the equation:
>
>> S = S1 * S2 * S3 * S4 ...
>
>>but unless our metric includes the stability of each sub-component
>>(class), it fails to measure anything significant.
>
>Stability is the inverse of "propensity for change". Now a software
>module can be instable for many reasons. [...]
>Does this completely address the stability issue? No, certainly not.
>But it does address some of it.

Well, maybe this is superfluous information to your point, but it
seems to me that a module has more than one type of "stability."

The key one here, I believe, is

1) stability of interface

Those modules that depend on another *should* depend on it at the
interface level. (Of course, that is not always the case. For example,
C++ *friends* can "cheat"...)

The interface has a few aspects of stability:

1a) backwards compatibility - how much the calls/types
remain the same.
1b) extension - extensions to an interface that don't affect
the existing part of the interface.

2) stability of implementation

Likewise, the implementation might change somewhat independently of
the interface.

2a) backwards compatibility - how much the behavior
remains the same.
2b) extension - implementation of the extensions

Very often, behavior of products changes even though the apparent
interface has not...case in point: bug fixes.

3) stability according to viewpoint

Another aspect of inter-module stability to consider is this: The
interface between A and B may not be the same as between A and C.
Case in point: in C++, A is superclass of B, A is used by C.
B has access to "protected" elements of A. C does not. These
are different "Views" of A. One view may be more stable than
another.

4) stability according to "state"

Another aspect is stability across states of an object (analogous
to solid, liquid, gas, etc.). For example, can persistent data
created by one version still be used by another? Can a version
of one sent across a network be comprehended by another? etc.

-t
--

Which came first? The chicken or the void *?


Patrick D. Logan

Aug 24, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) wrote:

>A program is not composed of homogeneous modules. It is a mistake to
>think that a valid metric can be created for heterogeneous compositions
>when details of the components are not specified. Dr. Deming's "Parable
>of the Red Beads" provides a good example of how logic and intuition
>fails us under these circumstances.

While I agree with this 100 percent, I also believe that a few basic
measures that are easy to gather can be used as "indications" to get
the most bang for the buck.

Jerry Fitzpatrick

Aug 24, 1995
In <41fonv$o...@Starbase.NeoSoft.COM> ti...@Starbase.NeoSoft.COM (Tim
Dugan) writes:

>Well, maybe this is superfluous information to your point, but it
>seems to me that a module has more than one type of "stability."

>...

Well, the term "stability" could have the variety of meanings you
suggest. I believe Bob Jacobsen suggested this as well. I think we need
to use the term very carefully.

To me, a software metric that uses coupling in its equation is a
coupling metric. Nevertheless, if we choose to call it a "stability"
metric, then it behooves us to make very clear what kind of stability
we're referring to.

Jerry Fitzpatrick

Aug 24, 1995
In <41g4o1$3...@ornews.intel.com> "Patrick D. Logan"
<patrick...@ccm.jf.intel.com> writes:

>The definition of "stability" in RCM's principle is fast, loose, and
>glossy. But it may be enough to be effective. Metrics should be simple
>yet meaningful in order to get the most bang for just a little bucks.
^^^^^^^^^^

There are other metrics criteria, but "meaningful" is certainly the
most important. Although metrics are somewhat controversial, it is
usually the case that simplicity and value are inversely proportional.

Jerry Fitzpatrick

Aug 24, 1995
In <41i3qr$m...@ornews.intel.com> "Patrick D. Logan"
<patrick...@ccm.jf.intel.com> writes:

>While I agree with this 100 percent, I also believe that a few basic
>measures that are easy to gather can be used as "indications" to get
>the most bang for the buck.

Indicators are good. After all, some information is usually better
than none.

The potential problem is that equations often fool people into thinking
that they have rock-solid data, not just an indication. That's why it's
so important to clarify the use *and misuse* of the metric.

Herb Sutter

Aug 25, 1995
In article <41j1ig$8...@ixnews3.ix.netcom.com>,

red...@ix.netcom.com (Jerry Fitzpatrick) wrote:
>In <41i3qr$m...@ornews.intel.com> "Patrick D. Logan"
><patrick...@ccm.jf.intel.com> writes:
>
>>While I agree with this 100 percent, I also believe that a few basic
>>measures that are easy to gather can be used as "indications" to get
>>the most bang for the buck.
>
>Indicators are good. After all, some information is usually better
>than none.
>
>The potential problem is that equations often fool people into thinking
>that they have rock-solid data, not just an indication. That's why it's
>so important to clarify the use *and misuse* of the metric.

Agreed. One of the very few complaints I have about Robert's first book is
that the metrics are usually given to two significant digits (e.g., 0.88,
0.73). Given the many (if reasonable) assumptions behind the metrics, and the
fact that they act more as indicators than as traditional exact measurements
in terms of design meaning (Robert himself notes categories that are
exceptions and whose raw metrics may not be as meaningful as most of the
others'), the extra precision can be misleading. It's like announcing a
pre-election poll showing Candidate A running at 45.61% of voter support, with
a margin of error of 6%, 19 times out of 20... that ".61" tacked on the end is
a meaningless precision because it goes beyond the number's reliable accuracy
in the first place. Error (in assumptions, in methods, in sample sizes, etc.)
swamps the claimed precision; it's Heisenberg in the large.

I suppose this isn't really much of a complaint after all, since the
alternative (showing just one significant digit) may have made readers wonder
about the rounding, so it's reasonable for him to do what he did... still,
what significant statement can you make about two class categories with a
particular metric value of 0.82 and 0.88, respectively? Probably even Robert
might agree that the answer is likely "not much". Maybe explaining this kind
of thing would be worth a footnote in the next book.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Herb Sutter 2228 Urwin, Ste 102 voice (416) 618-0184
Connected Object Solutions Oakville ON Canada L6L 2T2 fax (905) 847-6019

Robert Martin

Aug 25, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) writes:

>In <1995Aug23.0...@rcmcon.com> rma...@rcmcon.com (Robert
>Martin) writes:

>>>>--------------------------------------------------------------------
>>>> 8. Dependencies between released categories must run in the
>>>> direction of stability. The dependee must be more stable than
>>>> the depender.
>>>>--------------------------------------------------------------------

>>Stability is the inverse of "propensity for change". Now a software
>>module can be instable for many reasons. It may just be the kind of
>>module that changes a lot because it is intrinsically variable (e.g. it
>>may be part of a feature set that is constantly changing). Or, it may
>>change a lot because of external factors in its environment (e.g. the
>>things that it depends upon are changing).

>I'm not sure I understand your distinction between intrinsic and
>extrinsic stability.

Consider two modules: A depends upon B. A works. But B keeps on
changing because the customer can't decide about B's feature set.
Since B keeps changing, A is forced to change along with it. B is
intrinsically instable. A is extrinsically instable. The only reason
A changes is because of B.

So, extrinsic, or positional, instability relates to the dependencies
between modules.

Robert Martin

Aug 25, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) writes:

>rmartin said:
>>The internal factors that cause a module to change are difficult to
>>quantify. But the external factors are not. It is these external
>>factors that I am attempting to measure in the context of this
>>principle. It is not possible for me to measure the absolute
>>stability of a module, but it is possible for me to determine which
>>*other* modules will be affected if any particular module changes.

>Ditto. Why are the external factors easier to quantify than the
>internal factors?

Because, by definition, they are based upon intermodule dependencies;
and those dependencies can be unambiguously quantified (in C++, they
are #include statements).

>>Principle #8 is an attempt to make sure that modules which exhibit
>>external stability do not depend upon modules that exhibit external
>>instability.

>I don't get it.

Consider that we have an abstract class. We put this abstract class
into module A. Module D contains many derivatives of that abstract
class, so D depends upon A. The more derivatives that exist in D, the
stronger the dependency of D upon A is. (the Ca metric for A is
higher).

The more derivatives there are in D, the harder it is to change the
abstract class in A since such a change would force all of the classes
in D to change too. So the strength of the dependency from D to A makes
A more stable.

But, what if A depended upon another module M which contained concrete
classes that required constant changing (because the user was unsure
of the feature set). Then A would need to change with M, and D would
change with A. Thus M would feel backpressure from D not to change.

I am sure you have seen this kind of situation. You really need to
make a certain change, but you daren't because you would affect 25
other modules. This is what happens when highly responsible modules
(like A) depend upon modules that must change a lot.

So, modules with high positional stability (like A) should not depend
upon modules that have low positional stability (like M). Otherwise
it will be very difficult to change M.

>I'm concerned that by formulating a metric that is essentially an
>average of inter-module coupling, it gives the misleading impression
>that all modules have an equal, non-deterministic stability factor.
>This, of course, is completely false and could lead someone far afield
>in their design.

Agreed. However, the intrinsic stability of a module must match its
positional stability. In the example above, the module M was
intrinsically instable, but was placed in a position of responsibility
(i.e. A depended upon it) such that the necessary changes were
difficult to make.

Actually, we are still working in the dark a bit because principle #9
will better address the relationship between intrinsic stability and
positional stability. But even in the absence of that definition it
should be clear that if the positional stability of a module does not
match its intrinsic stability, there will be problems.

>In general, we have to be very careful about applying any metric. We're
>all comforted by the apparent empiricism of the metric, but its use as
>a heuristic can lead to problems.

>For example, it's been found that modules outside the range of 10-100
>statements have more bugs. This has led some companies to demand that
>modules never be larger than 100 LOC. Unfortunately, this can promote
>the breakup of a cohesive module into more-or-less arbitrary
>sub-modules. Clearly this can lead to more bugs, not fewer.

True enough. And there are always exceptions to every rule. However,
I think that it is always wise for an engineer to design his software
in such a way that the modules that are intrinsically variable are
able to change with minimum effect upon the rest of the software.
i.e. don't allow stable modules to depend upon instable modules.

>I'm not suggesting that a stability or coupling metric is
>inappropriate. I just think it has to be mathematically sound,
>well-explained (to avoid confusion), and provide a useful, reliable
>result.

Does the above help?

Robert Martin

Aug 25, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) writes:

>In <1995Aug23.0...@rcmcon.com> rma...@rcmcon.com (Robert
>Martin) writes:

>>So I satisfied myself that positional stability is not transitive. It
>>is an attribute of the module's *position* in the dependency
>>hierarchy, and has nothing whatever to do with the intrinsic stability
>>of any other modules.

>No, this I don't agree with. You're saying that overall stability
>depends only on topology.

Yes, consider the following reasoning. If a module is topologically
stable, then it will be stable overall because too many other modules
will be affected by changes to that module. Thus, even though you
want to make changes to the module you will be prevented by the
inordinate cost of adjusting all the dependent modules. On the other
hand, a topologically instable module is free to change without
affecting other modules. If such a module happens to be intrinsically
stable, no harm is done. So, overall, a module cannot be less stable
than its position dictates. i.e. overall instability can be no greater
than positional instability. Position constrains change. Overall
*stability* can be greater than positional stability, but cannot be
less.

Jerry Fitzpatrick

Aug 25, 1995
In <41jcjs$k...@steel.interlog.com> he...@interlog.com (Herb Sutter)
writes:

>Agreed. One of the very few complaints I have about Robert's first
>book is that the metrics are usually given to two significant digits
>(e.g., 0.88, 0.73). Given the many (if reasonable) assumptions behind
>the metrics, and the fact that they act more as indicators than as
>traditional exact measurements in terms of design meaning (Robert
>himself notes categories that are exceptions and whose raw metrics may
>not be as meaningful as most of the others'), the extra precision can
>be misleading. It's like announcing a pre-election poll showing
>Candidate A running at 45.61% of voter support, with a margin of error
>of 6%, 19 times out of 20... that ".61" tacked on the end is a
>meaningless precision because it goes beyond the number's reliable
>accuracy in the first place. Error (in assumptions, in methods, in
>sample sizes, etc.) swamps the claimed precision; it's Heisenberg in
>the large.

Your examples here are right on target.

>I suppose this isn't really much of a complaint after all, since the
>alternative (showing just one significant digit) may have made readers
>wonder about the rounding, so it's reasonable for him to do what he
>did... still, what significant statement can you make about two class
>categories with a particular metric value of 0.82 and 0.88,
>respectively? Probably even Robert might agree that the answer is
>likely "not much". Maybe explaining this kind of thing would be worth
>a footnote in the next book.

Bob certainly isn't the first person to make this mistake. The
literature is rife with statistical misuse, sometimes intentional. Of
course, it is *very* easy to fall into this trap by accident.

I'm sure that he has no intentions of deceiving people or promoting an
inaccurate metric. As you point out, other developers might easily
misinterpret or misuse the information in ways he didn't intend. As you
say, this isn't really much of a complaint compared to the other good
information contained in the book.

Jerry Fitzpatrick

Aug 25, 1995
In <1995Aug25....@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes:
>
>red...@ix.netcom.com (Jerry Fitzpatrick) writes:
>>I'm not sure I understand your distinction between intrinsic and
>>extrinsic stability.
>
>Consider two modules: A depends upon B. A works. But B keeps on
>changing because the customer can't decide about B's feature set.
>Since B keeps changing, A is forced to change along with it. B is
>intrinsically instable. A is extrinsically instable. The only reason
>A changes is because of B.
>
>So, extrinsic, or positional, instability relates to the dependencies
>between modules.

Thanks for the explanation. I think I understand now how you're using
the terms.

Still, I take a slightly different view of this.

To me, B is extrinsically unstable because its change depends strictly
upon external influences (the customer). Module A is also extrinsically
unstable because it depends upon the same external influences, albeit
through module B. Since there is nothing internal to A which causes
change (in this example), I would not call it intrinsically unstable.

Of course, this is more a fine point of English than a technical point.
It's just that I'd normally think of A and B as being externally
influenced, rather than being intrinsically or extrinsically
"unstable".

Jerry Fitzpatrick

Aug 25, 1995
One final note (from me) with respect to the metric presented with
Principle #8:

1) I think that coupling information is useful in design,
and don't mean to imply otherwise.

2) Nevertheless, a metric which includes only coupling
information is a coupling metric, *not* a stability metric.

3) There are at least a dozen existing coupling metrics that
are similar, if not identical, to this one. Why don't any
of these fulfill your needs?

Robert Martin

Aug 25, 1995
he...@interlog.com (Herb Sutter) writes:

>Agreed. One of the very few complaints I have about Robert's first book is
>that the metrics are usually given to two significant digits (e.g., 0.88,
>0.73).

Yes, I probably should have said something about that in the book.
The reason that I chose 2 places is that a decimal order of magnitude
is too large. The difference between .54 and .55 is not enough
to warrant the corresponding roundings of .5 and .6. By the same
token the difference between .45 and .54 is too large to warrant
naming them both .5.

Perhaps I should have used a different radix. One hexadecimal digit
after the 'heximal' point would probably have been sufficient. That
would have given me a resolution of 1 in 16 rather than one in 100.
Or perhaps 5 bits after the binary point giving me 1 in 32 resolution.
Or perhaps 2 digits after the quinary point giving me one in 25
resolution.

Ah well, there is no winning that game...

BTW, does anybody know how to count in binary... I mean, how to say
the actual words? One of my high school teachers taught me.

0 null
1 un
10 twin
11 twoon
100 twindred
101 twindred un
110 twindred twin
111 twindred twoon
1000 twinsand (what else?)
1001 twinsand un
1100 twinsand twindred
1111 twinsand twindred twoon
10000 twin twinsand
100000 twindred twinsand
1000000 twillion ;)

Jerry Fitzpatrick

Aug 25, 1995
In <1995Aug25....@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes (with edits):

>Consider that we have an abstract class. We put this abstract class
>into module A. Module D contains many derivatives of that abstract
>class, so D depends upon A. The more derivatives that exist in D, the
>stronger the dependency of D upon A is. (the Ca metric for A is
>higher).
>
>The more derivatives there are in D, the harder it is to change the
>abstract class in A since such a change would force all of the classes
>in D to change too. So the strength of the dependency from D to A makes
>A more stable.
>

>[snip]


>
>Actually, we are still working in the dark a bit because principle #9
>will better address the relationship between intrinsic stability and
>positional stability. But even in the absence of that definition it
>should be clear that if the positional stability of a module does not
>match its intrinsic stability, there will be problems.

Your description here is lucid, and your goal seems essentially
correct. Given your previous definitions of intrinsic/extrinsic
stability, the "positional mismatch" you describe makes sense.

Let me paraphrase the idea and see if you agree: "Separate the parts
of the code that are likely to change from the parts that are unlikely
to change." Yes?

Naturally, I have seen examples of the "strength due to dependency"
concept you describe. When you have a huge interconnected glob of code,
it can certainly make you think twice about changes. I think there's
somewhat of a double-edged sword here though.

Sometimes these software "black holes" are ill-conceived and virtually
force developers to create work-arounds. On the other hand, some have
true cohesional strength even though they're massive. Unfortunately,
these are often worked-around too, simply because the (new) developers
don't understand how to use or extend the code properly.

Still, with respect to metrics, I think the term "coupling" would be
better than the term "stability" even though your *goal* is to minimize
the effort required for changes.

Jerry Fitzpatrick

Aug 25, 1995
In <1995Aug25....@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes:

>BTW, does anybody know how to count in binary... I mean, how to say
>the actual words? One of my high school teachers taught me.
>
>0 null
>1 un
>10 twin
>11 twoon
>100 twindred
>101 twindred un
>110 twindred twin
>111 twindred twoon
>1000 twinsand (what else?)
>1001 twinsand un
>1100 twinsand twindred
>1111 twinsand twindred twoon
>10000 twin twinsand
>100000 twindred twinsand
>1000000 twillion ;)

Yikes! Twas that before or after his lobotomy? :)

Jerry Fitzpatrick

Aug 25, 1995
In <1995Aug25....@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes:

>Yes, consider the following reasoning. If a module is topologically
>stable, then it will be stable overall because too many other modules
>will be affected by changes to that module. Thus, even though you
>want to make changes to the module you will be prevented by the
>inordinate cost of adjusting all the dependent modules. On the other
>hand, a topologically instable module is free to change without
>affecting other modules. If such a module happens to be
>intrinsically stable, no harm is done. So, overall, a module
>cannot be less stable than its position dictates. i.e. overall
>instability can be no greater than positional instability. Position
>constrains change. Overall *stability* can be greater than positional
>stability, but cannot be less.

Well, I agree that stability is affected by topology (coupling).
However, topology by itself does not describe stability sufficiently.

You say, "If such a module happens to be intrinsically stable, no
harm done [by decoupling the module]". While it's true that this
won't "harm" the design functionally, there is a cost involved.

You seem to be advocating that every module should be as decoupled from
others as possible. Although decoupling is often beneficial, it is not
an overriding design principle. Decoupling modules often involves added
complexity and configurability. These increase development and testing
time, and may decrease reliability and understandability. After all,
what is the point in decoupling a module which has a 0% chance of
changing?

This situation is not unique to software. Mechanical products use a
variety of coupling methods. Take, for example, the difference between
a welded connection and a cotter-pin connection. The weld offers
extremely tight coupling, and therefore enhances mechanical strength
and reliability. The cotter-pin lacks the strength, but offers ease
of replacement.

It would be foolish to weld an air filter to an engine block. It would
also be foolish to hold the engine to the chassis with cotter pins. The
choice of coupling is an engineering tradeoff based on the costs of the
coupler and need for replacement. The topology of the car's parts has
little bearing on it.

You simply cannot specify the "best" method of coupling without
knowledge of other factors.

Ell

Aug 26, 1995
Robert Martin (rma...@rcmcon.com) wrote:

--------------------------------------------------------------------
8. Dependencies between released categories must run in the
direction of stability. The dependee must be more stable than
the depender.
--------------------------------------------------------------------

If you are saying that we should design so that the above is always true,
I think you are putting a one-sided emphasis on dependency management. I
could see where all else equal given two design alternatives, we go with
the one that achieves the above. But to ignore or override domain
analysis in a major way to achieve your principle would be a mistake, imo.

Elliott


Scott A. Whitmire

Aug 26, 1995
In <1995Aug23.0...@rcmcon.com>, rma...@rcmcon.com (Robert Martin) writes:
>red...@ix.netcom.com (Jerry Fitzpatrick) writes:
>
>>The notion that stability is inversely proportional to degree of
>>coupling has a lot of intuitive appeal. However, I'm not convinced that
>>it's correct.

>
>[snip]
>
>>the stability of a class category or assemblage can be
>>determined by the equation:
>
>> S = S1 * S2 * S3 * S4 ...
>
>>but unless our metric includes the stability of each sub-component
>>(class), it fails to measure anything significant.

This formula for the stability of a component is incorrect. If you remember your
probabilities, the probability of all of a set of events happening is the
product of the probabilities of each event ONLY if they are independent. In our
case, the stability of the component is the product of the stabilities of the
sub-components only if the sub-components are independent (that is, they don't
interact with each other, either directly or indirectly). I don't think that it
is possible for sub-components of a composite structure to be completely independent
of each other in software. If nothing else, they occupy the same name space and share
other resources.

The bottom line is that the real formula for the stability of a composite, while still
based on the stabilities of each component, is also influenced by the components'
interactions with each other. This is a two-part problem, part static and part dynamic.
In other words, the stability of a composite made up of interacting components can only
be determined at run time! And they wonder why we say that software is the most
complex thing we've undertaken.

>
>Stability is the inverse of "propensity for change". Now a software
>module can be instable for many reasons. It may just be the kind of
>module that changes a lot because it is intrinsically variable (e.g. it
>may be part of a feature set that is constantly changing). Or, it may
>change a lot because of external factors in its environment (e.g. the
>things that it depends upon are changing).
>

>The internal factors that cause a module to change are difficult to
>quantify. But the external factors are not. It is these external
>factors that I am attempting to measure in the context of this
>principle. It is not possible for me to measure the absolute
>stability of a module, but it is possible for me to determine which
>*other* modules will be affected if any particular module changes.
>

>Principle #8 is an attempt to make sure that modules which exhibit
>external stability do not depend upon modules that exhibit external
>instability.
>

>Does this completely address the stability issue? No, certainly not.
>But it does address some of it.
>

I suggest we give each of these ideas different names. The "external" factors
that Robert talks about can be called the category's exposure to changes from
interactional factors (these are really "internal" to the application and are
a result of its design). The "internal" factors that Robert discusses are the
category's exposure to change from environmental factors (these are really
"external" to the application, and beyond the control of the design).

So, we have stability as a function of interactional (or structural) factors and
environmental factors. In order to use this design principle in an actual design,
we have to have some idea of both forms of stability, even if one is just an
ordinal classification of "not likely," "maybe," and "quite probably" (will change).

Comments?

Scott A. Whitmire sco...@advsysres.com
Advanced Systems Research
25238 127th Avenue SE tel:(206)631-7868
Kent Washington 98031 fax:(206)630-2238

Consultants in object-oriented development and software metrics.


Jerry Fitzpatrick

Aug 26, 1995
In <nntpuserD...@netcom.com> sco...@advsysres.com (Scott A.
Whitmire) writes:

>>red...@ix.netcom.com (Jerry Fitzpatrick) writes:
>>>the stability of a class category or assemblage can be
>>>determined by the equation:
>>
>>> S = S1 * S2 * S3 * S4 ...
>>
>>>but unless our metric includes the stability of each sub-component
>>>(class), it fails to measure anything significant.
>
>This formula for the stability of a component is incorrect. If you
>remember your probabilities, the probability of all of a set
>of events happening is the product of the probabilities of each event
>ONLY if they are independent. In our case, the stability of the
>component is the product of the stabilities of the sub-components only
>if the sub-components are independent (that is, they don't interact
>with each other, either directly or indirectly). I don't think that it
>is possible for sub-components of a composite structure to be
>completely independent of each other in software. If nothing else,
>they occupy the same name space and share other resources.

I didn't intend the example as a correct metric, just a general example
of how individual stabilities combine to lessen overall stability.
Nevertheless, I think you're basically restating my earlier objections
to the "stability" metric.

>The bottom line is that the real formula for the stability of a
>composite, while still based on the stabilities of each component, is
>also influenced by the components interactions with each other. This
>is a two-part problem, part static and part dynamic. In other words,
>the stability of a composite made up of interacting components can
>only be determined at run time! And they wonder why we say that
>software is the most complex thing we've undertaken.

I think this depends upon how you look at stability. Personally, I'd
like to evict the term "stability" from the discussion, at least with
respect to the proposed metric.

To me, this is a three-part problem. First is the likelihood that a
particular module will change (stability?). Second is the static
coupling between this module and other modules, which can be analyzed
at compile time. Third is run-time effects, as you suggest here.

BTW, it's not clear to me what type of coupling Bob is referring to.
After all, there are at least five to eight different types of coupling
that have been identified in software systems.

>So, we have stability as a function of interactional (or structural)
>factors and environmental factors. In order to use this design
>principle in an actual design, we have to have some idea of both forms
>of stability, even if one is just an ordinal classification of "not
>likely," "maybe," and "quite probably" (will change).

Again, I think it's fine to identify the coupling topology of a design.
You could use this information to determine the ripple effects of
changing a particular module. However, the coupling topology *by
itself* tells you nothing whatsoever about the likelihood of change.

I'm starting to believe that this is largely an issue of terminology.
My preference is to stick with previously-defined terms unless there's
a very good reason to depart. I don't know of any existing work that
talks about intrinsic or extrinsic coupling, even though coupling has
been scrutinized very heavily over the years.

Robert Martin

Aug 27, 1995
red...@ix.netcom.com (Jerry Fitzpatrick) writes:

>You seem to be advocating that every module should be as decoupled from
>others as possible. Although decoupling is often beneficial, it is not
>an overriding design principle. Decoupling modules often involves added
>complexity and configurability. These increase development and testing
>time, and may decrease reliability and understandability. After all,
>what is the point in decoupling a module which has a 0% chance of
>changing?

This is an excellent point, and one that I should be stressing in
every one of these principles. There are costs associated with all of
them, and there are situations where that cost is not worth paying.

In chapter 2 of my current book (Designing Object Oriented C++
Applications using the Booch Method, Robert C. Martin, Prentice Hall,
1995), on page 127, I do a comparative analysis between a simple C
program, and a C++ program that does the same thing, but that is
designed using these principles. The cost difference is striking. The
simple C program consists of 74 lines of code split between two files.
The C++ program consists of 339 lines of code split between 32 files.

Now this dramatic increase in line count and file count is a bit
misleading. This problem has so little functionality that the bulk
of the class interfaces in C++ overwhelms it. The number of
*executable* lines of C++ code is far lower than 74. The rest is just
class and interface specification. In a larger, more functional
program, the disparity would not be as great. But the disparity would
still exist. So your point about increased complexity is well taken.

On the other hand, that simple little C program is exactly the kind of
program that would turn into a nightmare within a few years. Whereas
the C++ program is well organized and subdivided, and its
interdependencies are managed such that it will be quite simple to
maintain for a very long time. Moreover, whereas the simple C program
is utterly specific to its purpose, the C++ program has components
that could be reused in different applications from the same domain.

Again, as you say, this is not overriding. A program that has a short
lifecycle probably does not need maintainability or reusability. The
principles that I am advocating in this polythread are geared
primarily for creating applications that are highly maintainable, and
highly reusable. When maintainability and reusability are not
requirements of the design, then these principles are probably not
useful.

Herb Sutter

Aug 27, 1995
In article <41l6t0$2...@ixnews5.ix.netcom.com>,

red...@ix.netcom.com (Jerry Fitzpatrick) wrote:
>>I suppose this isn't really much of a complaint after all, since the
>>alternative (showing just one significant digit) may have made readers
>>wonder about the rounding, so it's reasonable for him to do what he
>>did... still, what significant statement can you make about two class
>>categories with a particular metric value of 0.82 and 0.88,
>>respectively? Probably even Robert might agree that the answer is
>>likely "not much". Maybe explaining this kind of thing would be worth
>>a footnote in the next book.
>
>Bob certainly isn't the first person to make this mistake. The
>literature is rife with statistical misuse, sometimes intentional. Of
>course, it's *very* easy to fall into this trap by accident.

Right. However, I don't really think it's a "mistake"; as I said, it was a
reasonable choice. It's just that as I was reading charts and saw numbers
like ".73" and ".78", it made me wonder whether the metric could be made more
useful by eliding extraneous data -- less clutter for the eyes.

Clearly the best (IMHO) way to represent this is as a scatter plot with
category names; the brain is much better at grasping spatial relationships
than at reading dry numbers, and on a plot extra precision doesn't matter.
(Did Bob ever include such a plot in his book (for real categories, not for
the theory part)? Can't remember. If not, strongly suggested for the next
one.)

>I'm sure that he has no intentions of deceiving people or promoting an
>inaccurate metric. As you point out, other developers might easily
>misinterpret or misuse the information in ways he didn't intend. As you
>say, this isn't really much of a complaint compared to the other good
>information contained in the book.

Definitely. It's still near the top of my "most-recommended" list. Now if
only both Martin and Meyers would get their next books out... ah, hold it, I
sense I'm being a greedy consumer again. Patience... :)

Robert Martin

Aug 27, 1995
COA...@EUROPA.UMUC.EDU (Ell) writes:

>Robert Martin (rma...@rcmcon.com) wrote:

The ideal is to subdivide the domain analysis such that it can be
designed according to this principle. I have never seen a counter
example. The next principle (#10) will show that abstraction can be
used at any time to redirect the dependencies so that principle #9 can
be achieved. Thus, the worst you would do to the domain model is to
add some extra abstraction. This does the domain model no harm, and
allows it to be designed according to this principle.

Robert Martin

Aug 28, 1995
he...@interlog.com (Herb Sutter) writes:

[Regarding the stability metrics proposed in this principle and in my
book.]

>Clearly the best (IMHO) way to represent this is as a scatter plot with
>category names; the brain is much better at grasping spatial relationships
>than at reading dry numbers, and on a plot extra precision doesn't matter.
>(Did Bob ever include such a plot in his book (for real categories, not for
>the theory part)? Can't remember. If not, strongly suggested for the next
>one.)

Check out page 418 of "Designing Object Oriented C++ Applications
using the Booch Method", Robert C. Martin, Prentice Hall, 1995. It
shows the scatter plot, complete with 1sigma and 2sigma lines.

Herb Sutter

Aug 29, 1995
In article <1995Aug28....@rcmcon.com>,

rma...@rcmcon.com (Robert Martin) wrote:
>he...@interlog.com (Herb Sutter) writes:
>
>[Regarding the stability metrics proposed in this principle and in my
>book.]
>
>>Clearly the best (IMHO) way to represent this is as a scatter plot with
>>category names; the brain is much better at grasping spatial relationships
>>than at reading dry numbers, and on a plot extra precision doesn't matter.
>>(Did Bob ever include such a plot in his book (for real categories, not for
>>the theory part)? Can't remember. If not, strongly suggested for the next
>>one.)
>
>Check out page 418 of "Designing Object Oriented C++ Applications
>using the Booch Method", Robert C. Martin, Prentice Hall, 1995. It
>shows the scatter plot, complete with 1sigma and 2sigma lines.

That's right, it's at the end of the chapter, and the sigma lines are a
perfect touch. But I'll still argue for ubiquitous graphics: how about
replacing most/all numeric tables with graphs and scatter plots, or at least
providing the latter in addition to the raw numbers? It makes it much easier
to identify outliers and improves readability. The fact that I couldn't
remember whether there were any scatter plots (six months after reading the
book) may say something! :)

When reading, I usually skipped the dry tables and waited for you to write
notes like "notice how Category XXX is odd" in the main text; with plots a
quick glance is all that's needed to see all the exceptional cases. This
speed/ease is especially important when a reader comes to a problem that the
author has already spent scores or hundreds of hours thinking about and
understands inside out... the author may know where to look to see the
outliers, but to the reader it's all new (and often quite complex!) and he
needs time -- and tools -- to assimilate it. In general, graphics are
underused in CS and yet make so many things painless (I can give an example of
really advanced proprietary financial trading system user interfaces, if
anyone's interested -- movement in particular is a much-underused part of
graphics that I expect we will only see more of in the next few years).

Question for all readers of RCM's book: How many of you actually _read_ the
tables, line by line and looking for outliers? If you did, did you find it
easy or a distraction? If you didn't, why not?

Cheers,

Herb

Scott A. Whitmire

Aug 29, 1995
In <41o45b$q...@ixnews2.ix.netcom.com>, red...@ix.netcom.com (Jerry Fitzpatrick) writes:
>In <nntpuserD...@netcom.com> sco...@advsysres.com (Scott A.
>Whitmire) writes:
>
[...some stuff deleted...]

>
>>The bottom line is that the real formula for the stability of a
>>composite, while still based on the stabilities of each component, is
>>also influenced by the components interactions with each other. This
>>is a two-part problem, part static and part dynamic. In other words,
>>the stability of a composite made up of interacting components can
>>only be determined at run time! And they wonder why we say that
>>software is the most complex thing we've undertaken.
>
>I think this depends upon how you look at stability. Personally, I'd
>like to evict the term "stability" from the discussion, at least with
>respect to proposed metric.
>
>To me, this is a three-part problem. First is the likelihood that a
>particular module will change (stability?). Second is the static
>coupling between this module and other modules, which can be analyzed
>at compile time. Third is run-time effects, as you suggest here.
>

Good point. In my work, I've been calling it "volatility." I have been
concentrating on the likelihood that a design component will change, for
whatever reason. So, we have environmental volatility and structural
volatility (due to dependencies).

Coupling may be one way to measure structural volatility. On the other hand,
it may just be an indicator of potential volatility. The actual likelihood of
a component having to change because of a coupling relationship is the
likelihood of change of the component to which it is coupled. This may or may
not be a highly volatile situation.

Bottom line, let's keep coupling and structural volatility separate for now.
It seems they don't measure the same thing.

Ell

Aug 29, 1995
In <nntpuserD...@netcom.com> sco...@advsysres.com writes:

> In <41o45b$q...@ixnews2.ix.netcom.com>, red...@ix.netcom.com (Jerry
> Fitzpatrick) writes:
> >To me, this is a three-part problem. First is the likelihood that a
> >particular module will change (stability?). Second is the static
> >coupling between this module and other modules, which can be analyzed
> >at compile time. Third is run-time effects, as you suggest here.
> >

> Good point. In my work, I've been calling it "volatility." I have been
> concentrating on the likelihood that a design component will change, for
> whatever reason. So, we have environmental volatility and structural
> volatility (due to dependencies).
>
> Coupling may be one way to measure structural volatility. On the other hand,
> it may just be an indicator of potential volatility. The actual likelihood of
> a component having to change because of a coupling relationship is the
> likelihood of change of the component to which it is coupled. This may or may
> not be a highly volatile situation.
>
> Bottom line, let's keep coupling and structural volatility separate for now.
> It seems they don't measure the same thing.

So in this #8, we have a weak basis for radically re-arranging the
analysis model.

And the most troubling aspect is that it was said we _"must"_ arrange
things to meet #8's goal, so that we have to sacrifice analysis to meet
a goal of at most limited value.

Elliott


Herb Sutter

Aug 29, 1995
In article <41v4p4$n...@nova.umuc.edu>, COA...@EUROPA.UMUC.EDU (Ell) wrote:
>So in this #8, we have a weak basis for radically re-arranging the
>analysis model.
>
>And the most troubling aspect is that it was said we _"must"_ arrange
>things to meet #8's goal. So that we have to sacrifice analysis to meet a
>goal of at most limited value.

Hmm. Elliott, this might sound disingenuous since Robert's already replied to
you showing that #8 is easy to achieve without "radically re-arranging"
anything (dependencies can be broken by adding ABCs). If you disagree with
this, why not give a specific counterexample?

Ell

Aug 29, 1995
In <41vfhh$4...@steel.interlog.com> he...@interlog.com writes:

> In article <41v4p4$n...@nova.umuc.edu>, COA...@EUROPA.UMUC.EDU (Ell) wrote:
> >So in this #8, we have a weak basis for radically re-arranging the
> >analysis model.
> >
> >And the most troubling aspect is that it was said we _"must"_ arrange
> >things to meet #8's goal. So that we have to sacrifice analysis to meet a
> >goal of at most limited value.

> Hmm. Elliott, this might sound disingenuous since Robert's already replied
> to you showing that #8 is easy to achieve without "radically re-arranging"
> anything (dependencies can be broken by adding ABCs). If you disagree with
> this, why not give a specific counterexample?

The last I saw he said something about a principle 10. Perhaps I missed
where he related his #8 to ABCs.

Elliott


Jerry Fitzpatrick

Aug 29, 1995
In <1995Aug27.1...@rcmcon.com> rma...@rcmcon.com (Robert
Martin) writes:

>When maintainability and reusability are not requirements of the
>design, then these principles are probably not useful.

I think your objectives were clearer in articles posted after the one I
responded to.

Surprisingly, maintainability and reusability seem to be a low priority
for many developers. It's worth reinforcing the benefits of this focus,
even for small projects. I just wouldn't want people to flip completely
in the other direction and start decoupling for its own sake.

Jerry Fitzpatrick

Aug 29, 1995
In <nntpuserD...@netcom.com> sco...@advsysres.com (Scott A.
Whitmire) writes (with edits):

>In my work, I've been calling it "volatilty." I have been
>concentrating on the likelihood that a design component will change,
>for whatever reason. So, we have environmental volatility and
>structural volatility (due to dependencies).
>
>Coupling may be one way to measure structural volatility. On the other
>hand, it may just be an indicator of potential volatility. The actual
>likelihood of a component having to change because of a coupling
>relationship is the likelihood of change of the component to which it
>is coupled. This may or may not be a highly volatile situation.
>
>Bottom line, let's keep coupling and structural volatility separate
>for now. It seems they don't measure the same thing.

Yes. This is how I would express the relationships as well.

Robert Martin

Aug 30, 1995
COA...@EUROPA.UMUC.EDU (Ell) writes:

>In <nntpuserD...@netcom.com> sco...@advsysres.com writes:

>> Coupling may be one way to measure structural volatility. On the
>> other hand, it may just be an indicator of potential volatility.
>> The actual likelihood of a component having to change because of a
>> coupling relationship is the likelihood of change of the component
>> to which it is coupled. This may or may not be a highly volatile
>> situation.
>>
>> Bottom line, let's keep coupling and structural volatility separate
>> for now. It seems they don't measure the same thing.

>So in this #8, we have a weak basis for radically re-arranging the
>analysis model.

No one said anything about either a weak basis or a radical rearrangement.
Scott is making the excellent point that a volatile dependency
structure does not guarantee true volatility. This is certainly true.
And in those cases where the designer is willing to risk the structure
of the application on his certainty of a particular module's intrinsic
stability, he has that right.

Also, the process of reordering dependencies is not a radical
rearrangement. It involves only the creation of an abstraction and
the separation of that abstraction into a module that is separate from
the users and derivatives of that abstraction. Thus, instead of
saying that the GUI module depends upon Squares and Circles, we say
that the GUI module depends upon the abstract Shape class, and that
Squares and Circles also depend upon the Shape class. Instead of
putting the GUI in one module and the shapes in another and having the
GUI module depend upon the Shapes module, we add a third module which
contains the abstract Shape class. Then the GUI module and the Shape
module depend upon the module that contains the abstract shape. This
is not a radical rearrangement of the GUI model, it is simply the
insertion of an abstraction into that model. An abstraction that
probably really belonged there in the first place.

>And the most troubling aspect is that it was said we _"must"_ arrange
>things to meet #8's goal. So that we have to sacrifice analysis to meet a
>goal of at most limited value.

The word "must" is internal to the principle. If you want to conform
to principle #8 you *must* route your dependencies in the direction of
stability. On the other hand, if you don't want to conform to
principle 8, then nobody is going to force you. As I said above, if
you have good reason to think that principle 8 won't help you in a
given circumstance, then you may decide that the cost of conformance
is too great.

For my money, however, an engineer would have to have a very very good
reason not to conform to principle 8. It is easy to do, it does no
damage, and it can save a world of headaches later.

Ell

Aug 31, 1995
In <1995Aug30.2...@rcmcon.com> rma...@rcmcon.com writes:

> COA...@EUROPA.UMUC.EDU (Ell) writes:
> >In <nntpuserD...@netcom.com> sco...@advsysres.com writes:
> >> Coupling may be one way to measure structural volatility. On the
> >> other hand, it may just be an indicator of potential volatility.
> >> The actual likelihood of a component having to change because of a
> >> coupling relationship is the likelihood of change of the component
> >> to which it is coupled. This may or may not be a highly volatile
> >> situation.
> >>
> >> Bottom line, let's keep coupling and structural volatility separate
> >> for now. It seems they don't measure the same thing.

> >So in this #8, we have a weak basis for radically re-arranging the
> >analysis model.

>...


> Also, the process of reordering dependencies is not a radical
> rearrangement. It involves only the creation of an abstraction and
> the separation of that abstraction into a module that is separate from
> the users and derivatives of that abstraction. Thus, instead of
> saying that the GUI module depends upon Squares and Circles, we say
> that the GUI module depends upon the abstract Shape class, and that
> Squares and Circles also depend upon the Shape class. Instead of
> putting the GUI in one module and the shapes in another and having the
> GUI module depend upon the Shapes module, we add a third module which
> contains the abstract Shape class. Then the GUI module and the Shape
> module depend upon the module that contains the abstract shape. This
> is not a radical rearrangement of the GUI model, it is simply the
> insertion of an abstraction into that model. An abstraction that
> probably really belonged there in the first place.

> >And the most troubling aspect is that it was said we _"must"_ arrange
> >things to meet #8's goal. So that we have to sacrifice analysis to meet a
> >goal of at most limited value.

> The word "must" is internal to the principle. If you want to conform
> to principle #8 you *must* route your dependencies in the direction of
> stability. On the other hand, if you don't want to conform to
> principle 8, then nobody is going to force you. As I said above, if
> you have good reason to think that principle 8 won't help you in a
> given circumstance, then you may decide that the cost of conformance
> is too great.

The discussion around #8, to me, has implied more than liberal use of
abstract interface classes. There was the "positional" aspect. However,
if you are saying by #8 that the positional aspect means concrete classes
implement abstract interface classes, I can agree with that. Though I
think the "positional" argument is not really apropos for expressing such
an idea.

Elliott


John DiCamillo

Aug 31, 1995
COA...@EUROPA.UMUC.EDU (Ell) writes:

>In <1995Aug30.2...@rcmcon.com> rma...@rcmcon.com writes:

>> COA...@EUROPA.UMUC.EDU (Ell) writes:
>> >So in this #8, we have a weak basis for radically re-arranging the
>> >analysis model.

>> Also, the process of reordering dependencies is not a radical
>> rearrangement. It involves only the creation of an abstraction and
>> the separation of that abstraction into a module that is separate from
>> the users and derivatives of that abstraction.

>> >And the most troubling aspect is that it was said we _"must"_ arrange
>> >things to meet #8's goal. So that we have to sacrifice analysis to meet a
>> >goal of at most limited value.

>> The word "must" is internal to the principle. If you want to conform
>> to principle #8 you *must* route your dependencies in the direction of
>> stability. On the other hand, if you don't want to conform to
>> principle 8, then nobody is going to force you. As I said above, if
>> you have good reason to think that principle 8 won't help you in a
>> given circumstance, then you may decide that the cost of conformance
>> is too great.

>The discussion around #8, to me, has implied more than liberal use of
>abstract interface classes. There was the "positional" aspect. However,
>if you are saying by #8 that the positional aspect means concrete classes
>implement abstract interface classes, I can agree with that. Though I
>think the "positional" argument is not really apropos for expressing such
>an idea.

I think the point of principle #8 is not the creation
or existence of AICs, but their placement into modules
or categories. This principle is about the physical
construction and release of programs, more than it is
about their abstract design.

For example, I was recently working on a program which
though it worked okay, was becoming difficult to manage.
It was a single EXE, and adding functionality to it was
taking longer and longer to compile and link. I wanted
to break the program up into several DLLs, but it was
highly interdependent. However, by applying Principle
#8, I was able to create a few DLLs that contained nothing
but Abstract Interface Classes. These libraries effectively
isolated previously coupled classes, allowing them
to be placed into separate modules. This in turn reduced
the build time, and greatly simplified testing.

The key was: this change didn't change the "design" of
the system at all. It simply refactored the existing
classes into more flexible units.


--
ciao,
milo
================================================================
John DiCamillo Fiery the Angels Fell
mi...@netcom.com Deep thunder rode around their shores

Robert Martin

Aug 31, 1995
COA...@EUROPA.UMUC.EDU (Ell) writes:

>The discussion around #8, to me, has implied more than liberal use of
>abstract interface classes.

Indeed. In fact, #8 doesn't really discuss abstraction at all. That
is left to principle #9. Principle #8 simply makes the point that
stable modules should not depend upon instable modules. This
shouldn't be a surprise to anyone. Principle 8 also measures one
factor of stability by measuring couplings. Again, the inference
should not be surprising. A module that is heavily depended upon is
more stable (due to the effort required to change it and all its
dependents) than a module that nobody depends upon.

>if you are saying by #8 that the positional aspect means concrete classes
>implement abstract interface classes, I can agree with that. Though I
>think the "positional" argument is not really apropo for expressing such
>an idea.

I am trying to separate the notion of abstraction and stability. I
will reunite them in Principle #9. All principle 8 is trying to do is
establish that there is a positional factor to stability, and that
modules should be arranged so that stable modules do not depend upon
instable modules.

Robert Martin

Aug 31, 1995
be...@cs.man.ac.uk (Stephen J Bevan) writes:

>[ I've trimmed comp.lang.c++ from the list of newsgroups since IMHO
> this has nothing specifically to do with C++ - bevan ]

[I have put the comp.lang.c++ group back since the last part mentions
C++ explicitly -- rmartin]

>In article <1995Aug27.1...@rcmcon.com> rma...@rcmcon.com (Robert Martin) writes:
> ... On the other hand, that simple little C program is exactly the kind of
> program that would turn into a nightmare within a few years. Whereas
> the C++ program is well organized and subdivided, and its
> interdependencies are managed such that it will be quite simple to
> maintain for a very long time. ...

>In principle I agree, however I'm still left wondering about the
>following cases which tend to sum up some of my experiences :-

>1. Five years down the line and the program hasn't been updated in any
> significant way at all. A symptom of the dependency management is
> that the program it is larger and more complex which has meant that
> the five different maintenance programmers who've been in charge of
> it over the years have had to expend much more effort to understand it.

Granted. And so you could argue that the initial designer made the
wrong choice. Since the program did not change much, he should not
have designed it to be changed.

However, complaining about this is like complaining about all the
money that you are spending in car insurance, or fire insurance. The
cost of software design is cheap insurance when compared to the cost
of difficult maintenance.

>2. For whatever reason the design wasn't flexible enough to cope with
> the changes that needed to be made and the resulting program
> has had to be (partially) redesigned and rewritten each time a
> significant change came up.

In this case the designers made a critical error. They did not
properly anticipate the kinds of changes that the program would
experience. Well -- sometimes you get the bear, and sometimes the bear
gets you. That is why experience is so important to software design.
The more experienced the designer the more likely that designer will
be to anticipate the way in which the software will evolve. Yet even
the best designers will sometimes fail to anticipate everything
correctly. So in the long run, it is still something of a gamble.

>However, to
>paraphrase Charles Moore "The number of ways a program can be extended
>are infinite" -- I'm also reminded of a design of an abstract
>interface for displaying objects which was thrashed out here in
>comp.object only to have someone note, once various people were
>satisfied with the design, that it didn't cope with transforming the
>object when it is displayed!

To tie this to another thread, this is exactly what makes software
hard. All we can really do, today, is find ways to hedge our bets.
Dependency management is a hedge. The critical criterion for a hedge
is not that it always works, but that it shifts the odds in favor of
success to the extent that successes outweigh failures.

>What's the point of this? Well I guess I'm fishing for references to
>projects where the design has stood the test of time (or nearly so),
>has been successfully extended in non-trivial ways and the design/code
>is open to public scrutiny or at least there is a *detailed*
>description of it. The last point seems to be the difficult as I've
>sometimes seen claims which amounted to the first two but where the
>design/code was not public and/or the description was at a
>high-level.

I made a presentation of such a project at this year's Object World.
I discussed the results of a 6 man-year project using C++ and OOD with
Dependency management. The results were good. We achieved extremely
high levels of reuse (70-80%), and associated productivity gains (in
some cases 6:1). We also experienced significant stability of design
in the presence of wildly changing requirements.

Not that everything was perfect. We did experience problems which
forced us into redesigns. One such was extremely significant. And
yet the "hedge" has paid off pretty well in the long run.

I cannot publish the design for this project since it is proprietary
to my client. But the principles of design that I am posting are
derived from the practices that we employed (and are still employing)
in this project (and in many other projects over the last 25 years).

If anybody would like to see the outline of that presentation, or
would like me to make that presentation at a user's group or
something, drop me a line.

Ell

Aug 31, 1995
In <1995Aug31.1...@rcmcon.com> rma...@rcmcon.com writes:

> COA...@EUROPA.UMUC.EDU (Ell) writes:
>
> >The discussion around #8, to me, has implied more than liberal use of
> >abstract interface classes.

> Indeed. In fact, #8 doesn't really discuss abstraction at all. That
> is left to principle #9. Principle #8 simply makes the point that
> stable modules should not depend upon instable modules. This
> shouldn't be a surprise to anyone.

It surprises me. I've never heard of this design principle before this.
And I've read the best of them, and worked with some good'uns too. This
would mean that a module which changes more than a second one cannot
refer to the second module. To me that is wholly unnatural, and would
lead to bizarre designs which have little or nothing to do with the
relationship of classes in domain analysis. That would be against the
design principles laid out by Booch, Jacobson, and Rumbaugh to name a few.

> Principle 8 also measures one
> factor of stability by measuring couplings. Again, the inference
> should not be surprising. A module that is heavily depended upon is
> more stable (due to the effort required to change it and all its
> dependents) than a module that nobody depends upon.

Just because a module is "heavily" depended on (I guess you mean it
is referred to by, or provides services for many others) does not
necessarily mean it is more stable than the dependers. At least it hasn't
meant that heretofore in design theory. Yes following your principle that
is how our designs would end up. But that's different from saying that
in most programs which exist today, the majority of modules which are
"heavily" depended on are more stable than the classes that depend on
them.

Elliott


Stephen J Bevan

Aug 31, 1995
[ I've trimmed comp.lang.c++ from the list of newsgroups since IMHO
this has nothing specifically to do with C++ - bevan ]

In article <1995Aug27.1...@rcmcon.com> rma...@rcmcon.com (Robert Martin) writes:


... On the other hand, that simple little C program is exactly the kind of
program that would turn into a nightmare within a few years. Whereas
the C++ program is well organized and subdivided, and its
interdependencies are managed such that it will be quite simple to
maintain for a very long time. ...

In principle I agree, however I'm still left wondering about the
following cases which tend to sum up some of my experiences :-

1. Five years down the line and the program hasn't been updated in any
   significant way at all. A symptom of the dependency management is
   that the program is larger and more complex, which has meant that
   the five different maintenance programmers who've been in charge of
   it over the years have had to expend much more effort to understand it.

2. For whatever reason the design wasn't flexible enough to cope with
   the changes that needed to be made, and the resulting program
   has had to be (partially) redesigned and rewritten each time a
   significant change came up.

In the first case you can at least hope that if changes do ever need
to be made then at least they will be easier to make. However, that
leads us to the second point. It is tempting to point the finger at
the designer and say that the design wasn't right initially since it
wasn't flexible enough to cope with the changes. However, to
paraphrase Charles Moore, "The number of ways a program can be extended
are infinite" -- I'm also reminded of a design of an abstract
are infinite" -- I'm also reminded of a design of an abstract
interface for displaying objects which was thrashed out here in
comp.object only to have someone note, once various people were
satisfied with the design, that it didn't cope with transforming the
object when it is displayed!

What's the point of this? Well I guess I'm fishing for references to

Robert Martin

Sep 1, 1995
COA...@EUROPA.UMUC.EDU (Ell) writes:

>In <1995Aug31.1...@rcmcon.com> rma...@rcmcon.com writes:
>> Indeed. In fact, #8 doesn't really discuss abstraction at all. That
>> is left to principle #9. Principle #8 simply makes the point that
>> stable modules should not depend upon instable modules. This
>> shouldn't be a surprise to anyone.

>It surprises me. I've never heard of this design principle before this.
>And I've read the best of them, and worked with some good'uns too. This
>would mean that a module which changes more than a second one can not
>refer to the second module.

No, just the opposite. If A changes more than B, then B should not
refer to A. Dependencies should run in the direction of stability.
Stable modules should not depend upon instable modules.

>To me that is wholly unnatural, and would
>lead to bizarre designs which have little or nothing to do with the
>relationship of classes in domain analysis.

Can you justify this with an example?

>That would be against the
>design principles laid out by Booch, Jacobson, and Rumbaugh to name a
>few.

I sincerely doubt whether any of the above researchers would recommend
that stable modules should depend upon instable ones.

To quote Jacobson: (OOSE p. 196) "...the most important
criterion for this subsystem division is predicting what the system
changes will look like and then making the division on the basis of
this assumption."

>> Principle 8 also measures one
>> factor of stability by measuring couplings. Again, the inference
>> should not be surprising. A module that is heavily depended upon is
>> more stable (due to the effort required to change it and all its
>> dependents) than a module that nobody depends upon.

>Just because a module is "heavily" depended on (I guess you mean it
>is referred to by, or provides services for many others) does not
>necessarily mean it is more stable than the dependers.

Correct. A heavily depended upon module tends to be stable just
because of the difficulty involved with changing all the dependents.
But that does not mean that the module is more stable than the
dependents. Principle 8 is asserting, however, that you should
arrange your modules such that instable modules depend upon stable
modules.

>At least it hasn't
>meant that heretofore in design theory. Yes following your principle that
>is how our designs would end up. But that's different from saying that
>in most programs which exist today, the majority of modules which are
>"heavily" depended on are more stable than the classes that depend on
>them.

Right.

Ell

Sep 1, 1995
In <1995Sep1.1...@rcmcon.com> rma...@rcmcon.com writes:

> COA...@EUROPA.UMUC.EDU (Ell) writes:
>
> >In <1995Aug31.1...@rcmcon.com> rma...@rcmcon.com writes:
> >> Indeed. In fact, #8 doesn't really discuss abstraction at all. That
> >> is left to principle #9. Principle #8 simply makes the point that
> >> stable modules should not depend upon instable modules. This
> >> shouldn't be a surprise to anyone.

> >It surprises me. I've never heard of this design principle before this.
> >And I've read the best of them, and worked with some good'uns too. This
> >would mean that a module which changes more than a second one can not
> >refer to the second module.

> No, just the opposite. If A changes more than B, then B should not
> refer to A. Dependencies should run in the direction of stability.
> Stable modules should not depend upon instable modules.

Right, I read it again and didn't have time to correct the error.



> >To me that is wholly unnatural, and would
> >lead to bizarre designs which have little or nothing to do with the
> >relationship of classes in domain analysis.

> Can you justify this with an example?

Say we have a car and tires. Say for our domain context that a car
has its tires, not the other way around, with a tire having a car. So we
want our car to have one or more tires as members, or one or more
pointers to tires. Now say the nature of tires is changing constantly
while cars do not change as frequently. By #8 I should not refer to the
tires from the car class. Rather I must have the instable tires depend on
the stable car by making the car a member of the tires, or by giving a
tire a pointer to a car.



> >That would be against the
> >design principles laid out by Booch, Jacobson, and Rumbaugh to name a
> >few.

> I sincerely doubt whether any of the above researchers would recommend
> that stable modules should depend upon instable ones.
>
> To quote Jacobson: (OOSE p. 196) "...the most important
> criterion for this subsystem division is predicting what the system
> changes will look like and then making the division on the basis of
> this assumption."

I do not see where this Jacobson quote supports the notion above it.



> >> Principle 8 also measures one
> >> factor of stability by measuring couplings. Again, the inferrence
> >> should not be surprising. A module that is heavily depended upon is
> >> more stable (due to the effort required to change it and all its
> >> dependents) than a module that nobody depends upon.

> >Just because a module is "heavily" depended on (I guess you mean it
> >is referred to by, or provides services for many others) does not
> >necessarily mean it is more stable than the dependers.

> Correct.

OK.

> A heavily depended upon module tends to be stable just
> because of the difficulty involved with changing all the dependents.

You lost me. What I meant where you say "Correct." is that there are many
cases in existing software where heavily depended on modules are subject to
more change than the modules depending on them, reflecting situations such
as in my example above. I was not saying that in existing software, in most
cases, instable modules depend on stable ones. And further, I was not
saying that such cases were due to the "difficulty involved with changing
all the dependents".

> But that does not mean that the module is more stable than the
> dependents.

When you speak this way are you saying that in current software not all
dependency runs as you recommend? I said that in my last reply, as
you seem to imply most existing software uses #8. I guess I'm not
sure how this relates to your quote above this last one. [Which
immediately preceded it in your reply. They were in the same
paragraph:
