
SCRUM and Why the Waterfall Methodology is a Fool's Errand ...


Jeff Sutherland

Nov 21, 1995
Scott Wheeler wrote:
>
> In Article <30AF37...@vmark.com> Jeff Sutherland writes:
> >... the Waterfall Methodology is fatally
> >flawed and doomed to failure....
>
> Is this news? I was under the impression that this had been
> generally accepted for at least 20 years.
>
> Scott

It is not news to experienced object-oriented developers. It is
news to a large segment of the MIS community. One of the most
experienced people in the RDBMS community sent me email on this
posting saying the Waterfall method worked flawlessly for him.

At a recent tutorial at an OO Conference, I made the point that
there were a number of companies complaining about not getting
the benefits promised by object technology. In every case that
I have been able to examine closely, they were not using an
iterative/incremental development process (see Pittman, IEEE
Software, Jan 93).

I explained to the attendees that if they use the Waterfall
approach, they can expect to get only minimal productivity gains
using OO technology and there was almost a rebellion in the
room. At least one development manager said he could not adopt
OO if he had to change any of the current (antiquated)
development process.

This is a major issue for the uninitiated and I think the OO
community needs to make it an educational priority whenever we
have the opportunity.

Jeff Sutherland
mailto:jsuth...@vmark.com
http://www.tiac.net/users/jsuth/

Kent Beck

Nov 21, 1995
In <jzq...@bmtech.demon.co.uk> Scott Wheeler
<sco...@bmtech.demon.co.uk> writes:
>
>In Article <30AF37...@vmark.com> Jeff Sutherland writes:
>>... the Waterfall Methodology is fatally
>>flawed and doomed to failure....
>
>Is this news? I was under the impression that this had been generally
>accepted for at least 20 years.
>
>Scott

Then why do some of my clients (at least initially) think that
"iterative development" means "an iteration for analysis, an iteration
for design, ..."? Most development organizations still prefer a
pile-of-paper-based illusion of control to active management of
development risk.

Kent

Michael E. Wesolowski

Nov 21, 1995
On Tue, 21 Nov 1995, Jeff Sutherland wrote:

<snip>

> At a recent tutorial at an OO Conference, I made the point that
> there were a number of companies complaining about not getting
> the benefits promised by object technology. In every case that
> I have been able to examine closely, they were not using an
> iterative/incremental development process (see Pittman, IEEE
> Software, Jan 93).
>
> I explained to the attendees that if they use the Waterfall
> approach, they can expect to get only minimal productivity gains
> using OO technology and there was almost a rebellion in the
> room. At least one development manager said he could not adopt
> OO if he had to change any of the current (antiquated)
> development process.
>

Excuse my ignorance, but why would you recommend against using the
Waterfall model for OO technology? I've seen this hinted at before but I
don't know the rationale. Thanks.

---------------------------------------------------------------------

Michael Wesolowski (mewe...@freenet.calgary.ab.ca)

Robert C. Martin

Nov 22, 1995

Indeed, a major methodologist appears to have the same idea.
Steve Mellor recently said the following in an article in which he
was answering Booch with respect to the differences between their methods.

"If you picture the system as a box with the application at the
top, divided into subsystems across the top, and the implementation
at the bottom, then I see you taking slices vertically (ie each slice
covers a piece from top to bottom), and Shlaer-Mellor partitioning
the project horizontally."

--
Robert Martin | Design Consulting | Training courses offered:
Object Mentor Assoc.| rma...@oma.com | OOA/D, C++, Advanced OO
2080 Cranbrook Rd. | Tel: (708) 918-1004 | Mgt. Overview of OOT
Green Oaks IL 60048 | Fax: (708) 918-1023 | Development Contracts.


Robert C. Martin

Nov 22, 1995
Jeff Sutherland wrote:
>
> I explained to the attendees that if they use the Waterfall
> approach, they can expect to get only minimal productivity gains
> using OO technology and there was almost a rebellion in the
> room. At least one development manager said he could not adopt
> OO if he had to change any of the current (antiquated)
> development process.

This is actually a very serious problem. Many large organizations
have built Waterfall into their accounting system. They "Capitalize"
software as it moves through the various waterfall phases. This has
a significant impact on their tax burden.

Clearly they have to change their accounting model in order to move
to iterative development. This is an astounding example of how
interdependent things can become.

Charles C. Lloyd

Nov 22, 1995
Jeff Sutherland <js...@vmark.com> wrote:
> At a recent tutorial at an OO Conference, I made the point that
> there were a number of companies complaining about not getting
> the benefits promised by object technology. In every case that
> I have been able to examine closely, they were not using an
> iterative/incremental development process (see Pittman, IEEE
> Software, Jan 93).

I would claim that most companies are not getting the benefits promised by
OO because they have chosen to use C++ rather than a true OO system
(Smalltalk, Objective-C, to name a couple). The C++ environments I've seen
are not conducive to iterative development, while Smalltalk certainly is.
So while a mindset change is required, so is a change of tools.

Charles.
---
Charles Lloyd cll...@giantleap.com
GiantLeap Software PO Box 8734
(702) 831-4630 Incline Village, NV 89452

Patrick D. Logan

Nov 22, 1995
Charles C. Lloyd <cll...@giantleap.com> wrote:

>I would claim that most companies are not getting the benefits promised by
>OO because they have chosen to use C++ rather than a true OO system
>(Smalltalk, Objective-C, to name a couple). The C++ environments I've seen
>are not conducive to iterative development, while Smalltalk certainly is.
>So while a mindset change is required, so is a change of tools.

In practice, it seems a "true OO" and iterative development environment
does not solve the problem. The problem is the lack of a good design.
"Architecture driven" is the word folks have been using here lately.

There are plenty of horror stories about people using Smalltalk. It is not
because the language or environment is incapable of supporting success;
there are plenty of success stories, too, just as there are for C++.

So all things being equal, I agree with the statement above. But I do not
think it is worth parading it around. I think it is better, generally, to
emphasize good design. Then slip in the pitch that says, "By the way, now
that you're designing so well, how'd you like to speed the process up a bit?"

--
mailto:Patrick...@ccm.jf.intel.com
(503) 264-9309, FAX: (503) 264-3375

"Poor design is a major culprit in the software crisis...
..Beyond the tenets of structured programming, few accepted...
standards stipulate what software systems should be like [in] detail..."
-IEEE Computer, August 1995, Bruce W. Weide

Jeff Sutherland

Nov 22, 1995
Michael E. Wesolowski wrote:
>Excuse my ignorance, but why would you recommend against using the
>Waterfall model for OO technology? I've seen this hinted at before but I
>don't know the rationale. Thanks.

There has been a lot posted here before on this. You can surf to my Home
Page to get the latest info and from there to my SCRUM page.

Jeff Sutherland
Mailto:jsuth...@vmark.com
http://www.tiac.net/users/jsuth/
(Object Studio 5.0, a better way to do C++ development!)

Robert C. Martin

Nov 22, 1995
Michael E. Wesolowski wrote:
>
> Excuse my ignorance, but why would you recommend against using the
> Waterfall model for OO technology? I've seen this hinted at before but I
> don't know the rationale. Thanks.
>

The Waterfall method first became important about 30 years ago. At
that time, most projects were very small. A 10,000 line program
weighed a good 40 pounds, and was too large to carry down the hall
to the card reader.

In those days, each engineer did what was right in his own eyes. There
was not "method" for producing software, and none was really needed
in most cases.

However, as projects became more and more complex, some kind of
management scheme was needed to partition the project into measurable
milestones. The division between analysis, design and implementation
was already understood, so it was natural to formalize these activities
into phases, and then to make them entities in a schedule. Managers
could now ask questions such as: "Will you be done with the analysis
on time?"

This worked well for many years. However, there is a built-in flaw.
The act of analysis is the act of understanding the problem to be
solved. How does one make any predictions about the schedule of
a project prior to analyzing it (and thus understanding it)? For
relatively small projects, gut estimates are reasonable. But as
complexity increases, gut estimates become less and less reliable.

Today, a 10,000-line program is insignificant. 100,000 lines is
barely respectable. Most of us are working on projects that will
total many hundreds of thousands of lines. Megaline projects are the
rule, not the exception.

How can one predict, with any accuracy at all, how long it will take
to analyse, design and implement a megaline project, prior to having
a reasonable analysis? The task is impossible. This fact is borne
out by the plethora of failures. Last year, Scientific American
published a frightening statistic. The probability of project failure
increases geometrically with project complexity. The statistic implied
that there was an asymptote! Thus, there may be an absolute limit
to the complexity we can handle. At least when using Waterfall.

The Waterfall method, as conducted by most organizations today, does
not allow feedback. Errors are corrected in the phase in which they
are found. Analysis errors discovered in the design phase are fixed
in the design, not in the analysis (Many people say that they will
go back and fix the analysis, but this does not take place as a rule).

Moreover, and more importantly, the early phases have no completion
criteria. There is no way to tell if the analysis or design is
actually complete. As a result, they tend to be "finished" on
schedule. This communicates false results to the managers, since
the reality is that the analysis and design are probably not complete
and are probably flawed.

In waterfall, the only deliverable that can be tested for completion
and correctness is the Implementation. And this, of course, is where
we have all the problems.

----

Using the iterative method, we split the project into a series of
vertical slices. Each slice is composed of a small set of project
features. We conduct small waterfalls on each slice, driving the
slice all the way from analysis to implementation.

Upon completing the first slice (which ought to be the most complex
of them all) we have some data. We know how long it took to create
that slice. By extrapolation we can determine how long it will take
to complete the rest of the slices. As slice after slice is completed,
the schedule becomes more and more accurate. Moreover, at the
completion of each slice, we have a valid analysis and design for
that slice. That analysis and design gets fed back into the next
slice.

Thus, the project is subdivided into a set of milestones that have
unambiguous completion criteria. The development process produces
data that managers can use to track and predict the schedule, and
each slice feeds back more information into the next slice.

In short, the difference between Iterative development, and Waterfall
development is the difference between an open-loop and a closed-loop
control system. Closed loop systems (systems that employ feedback) are
generally superior.
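
To make the extrapolation arithmetic concrete, here is a minimal
sketch in C++; the slice count and the effort figures are invented
for illustration, and a real project would refine the numbers as
each slice completes.

#include <cstdio>

int main() {
    // Weeks actually spent on the slices completed so far
    // (hypothetical figures).
    double completed[] = {6.0, 4.5, 4.0};
    const int done = 3;
    const int total = 10;   // planned number of slices

    double spent = 0.0;
    for (int i = 0; i < done; ++i)
        spent += completed[i];

    // Naive extrapolation: assume each remaining slice takes the
    // average of those finished so far. Every completed slice
    // refines this estimate; that is the feedback loop at work.
    double average = spent / done;
    double remaining = average * (total - done);

    std::printf("%.1f weeks spent, about %.1f weeks remaining\n",
                spent, remaining);
    return 0;
}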

vvadl...@vnet.ibm.com

Nov 22, 1995
In article <30B348...@oma.com>,
on Wed, 22 Nov 1995 09:59:29 -0600,
"Robert C. Martin" <rma...@oma.com> writes:
>Michael E. Wesolowski wrote:
>>
>> Excuse my ignorance, but why would you recommend against using the
>> Waterfall model for OO technology? I've seen this hinted at before but I
>> don't know the rationale. Thanks.
>>
>
>The Waterfall method first became important about 30 years ago.
<snip>
>Using the iterative method, we split the project into a series of
>vertical slices. Each slice is composed of a small set of project
>features. We conduct small waterfalls on each slice, driving the
>slice all the way from analysis to implementation.
>

Splitting the project into a series of vertical slices with a subset
of features is the preferred method in prototyping the product.
A quick demonstration of some important aspects of the product development
scheme, which includes all the components of analysis, design and
implementation, is what I have seen in many projects I have been involved
with.

>Upon completing the first slice (which ought to be the most complex
>of them all) we have some data. We know how long it took to create
>that slice. By extrapolation we can determine how long it will take
>to complete the rest of the slice. As slice after slice is completed,

However, practical experience suggests that this is not a very reliable
indicator of the work yet to be done.
Inevitably, the time of integration between slices is underestimated.
New features are thrown in while the development cycle is in progress.

>the schedule becomes more and more accurate. Moreover, at the
>completion of each slice, we have a valid analysis and design for
>that slice. That analysis and design gets fed back into the next
>slice.
>

This is true to a very large extent.

>Thus, the project is subdivided into a set of milestones that have
>unambiguous completion criteria. The development process produces
>data that managers can use to track and predict the schedule, and
>each slice feeds back more information into the next slice.
>
>In short, the difference between Iterative development, and Waterfall
>development is the difference between an open-loop and a closed-loop
>control system. Closed loop systems (systems that employ feedback) are
>generally superior.

Not to be picky, but the output of a closed-loop system is fed back
through the inverse of the plant (control systems terminology) and this is
used as a modifier to the input of the original plant.

Well, closed loops for software development projects are nowhere near
that kind of smooth functioning. I hope to see one someday ;)


Regards,

Vish Vadlamani


--
___________________________________________________________
| |
Vish Vadlamani | v v |
IBM - FT Networking | v v |
| v |
vvadl...@VNET.IBM.COM | |
________________________________|_________________________|
All opinions expressed are mine only and not those of IBM |
or my employers. |
__________________________________________________________|

Robert C. Martin

Nov 23, 1995
In article <NEWTNews.81706...@sallys.projtech.com> st...@projtech.com writes:

> I suggest that a process that:
>
> 1. Identifies the domains (layers) in the system, as the first step
> 2. Immediately moves to understand (and document) the dependencies
>    between the domains
> 3. Analyzes each domain separately and (relatively) independently
>
> is taking on active management of development risk, through explicit
> management of the dependencies between the layers. Further, we
> begin this very early on in the development process. Finally I say
> for one last time, that such a process is not a waterfall.

Agreed, this is not waterfall. Moreover, most of this is very
traditional OO thinking. Our previous confusion was primarily due to
vocabulary. You were contrasting Booch's vertical slices to your
horizontal slices. But you don't really have horizontal slices (in
terms of Analysis, Design, Implementation). You have orthogonal slices
in terms of subject areas.

My only comment here is that you still need vertical slices. It would
be a grave flaw to analyse and develop the subject areas separately.
There is no good way to keep them in sync except to cut vertical slices
through the entire product, touching all the subject areas, and make
those slices deliverables in a schedule. In that way, for each
scheduled deliverable, the subject areas must be in sync. This closes
the loop and staves off the integration nightmare.
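
As a rough illustration of a vertical slice that touches two subject
areas through their interfaces, here is a minimal C++ sketch; the
domain names and operations are invented, and the point is only that
each scheduled deliverable forces the domains to be tested against
each other.

#include <cassert>
#include <string>

struct Persistence {                    // storage subject area
    virtual void save(const std::string& record) = 0;
    virtual std::string load() = 0;
    virtual ~Persistence() {}
};

struct Application {                    // application subject area
    explicit Application(Persistence& db) : db_(db) {}
    void recordOrder(const std::string& order) { db_.save(order); }
    std::string lastOrder() { return db_.load(); }
private:
    Persistence& db_;
};

struct InMemoryDb : Persistence {       // trivial stand-in for this slice
    void save(const std::string& r) { last_ = r; }
    std::string load() { return last_; }
    std::string last_;
};

int main() {
    InMemoryDb db;
    Application app(db);
    app.recordOrder("order-1");
    assert(app.lastOrder() == "order-1");   // slice-level integration check
    return 0;
}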

Scott A. Whitmire

Nov 23, 1995
In <48vfqc$l...@jobes.sierra.net>, Charles C. Lloyd <cll...@giantleap.com> writes:
>I would claim that most companies are not getting the benefits promised by
>OO because they have chosen to use C++ rather than a true OO system
>(Smalltalk, Objective-C, to name a couple). The C++ environments I've seen
>are not conducive to iterative development, while Smalltalk certainly is.
>So while a mindset change is required, so is a change of tools.
>

Bull!

Scott A. Whitmire sco...@advsysres.com
Advanced Systems Research
25238 127th Avenue SE tel:(206)631-7868
Kent Washington 98031 fax:(206)630-2238

Consultants in object-oriented development and software metrics.


Paul Johnson

Nov 23, 1995
In article <30B342...@oma.com>, rma...@oma.com says...

>This is actually a very serious problem. Many large organizations
>have built Waterfall into their accounting system. They "Capitalize"
>software as it moves through the various waterfall phases. This has
>a significant impact on their tax burden.
>
>Clearly they have to change their accounting model in order to move
>to iterative development. This is an astounding example of how
>interdependent things can become.

I'm not so sure that this is a problem. Bertrand Meyer advocates a "cascade"
lifecycle where you break up a project into clusters (Steve Mellor calls them
"domains"), each of which is small enough for two or three people to do in a
month or two. Then each of these clusters is developed in a mini-Waterfall.
The "requirements document" for each cluster is the interface it offers its
clients.

You could integrate this into the Waterfall Capitalisation accounting system
by declaring each cluster to be a mini-project, and assigning it a value.

Paul.

--
Paul Johnson | GEC-Marconi Ltd is not responsible for my opinions. |
+44 1245 473331 ext 2245+-----------+-----------------------------------------+
Work: <paul.j...@gmrc.gecm.com> | You are lost in a twisty maze of little
Home: <Pa...@treetop.demon.co.uk> | standards, all different.


Robert Cowham

Nov 23, 1995
In article <RMARTIN.95...@rcm.oma.com>,
rma...@rcm.oma.com (Robert C. Martin) wrote:
>
snip
>
>Iterative development, and quick-cycle hacking are two different things.
>In iterative development the iterations are on the order of a few weeks.
>Each iteration produces a more functional version of the product than
>the last. The iterations are *planned* ahead of time, and are even
>given tentative (and highly inaccurate) dates. As the iterations
>proceed, the schedule is updated. The time it took to actually complete
>an iteration is data that can be fed back into the schedule to help
>predict how long it will take to complete the other iterations.
>

Some RAD methods use the technique of timeboxing where the dates of iterations
are fixed, yet the content of iterations is not fixed - items can be slipped
to the next release. An example is DSDM (Dynamic Systems Development Method)
which is gaining a lot of acceptance here in the UK.
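
A minimal sketch of the timeboxing idea, in C++ with invented item
names and estimates: the end date of the box is fixed, and whatever
does not fit slips to the next release.

#include <cstdio>
#include <string>
#include <vector>

struct Item { std::string name; int days; };

int main() {
    const int boxLength = 20;   // working days per timebox (fixed)
    std::vector<Item> backlog = {
        {"customer search", 8}, {"order entry", 9}, {"reports", 7}};

    int used = 0;
    for (const Item& it : backlog) {
        if (used + it.days <= boxLength) {
            used += it.days;    // fits within the fixed timebox
            std::printf("this release: %s\n", it.name.c_str());
        } else {
            std::printf("slipped:      %s\n", it.name.c_str());
        }
    }
    return 0;
}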

---------------------------------
Robert Cowham cow...@logica.com
Logica UK Ltd, 75 Hampstead Rd, London NW1 2PL, UK
Opinions expressed are my own

David Linthicum

Nov 24, 1995
The issue is that when using the "traditional" waterfall methodologies,
as long as you build in OOA/OOD, it's going to take longer than using a
RAD development approach. The tradeoff is OO. Using RAD, developers are
quick to blow by a proper application design to obtain development speed;
however, if they are using OO in a waterfall, they are not.

This is driven by RAD tool vendors who are selling tools as "programming
without design." MIS management likes the idea of RAD, and therefore are
moving in that direction. It's cheaper, the clients like it, and seems
to provide an effective approach to development. The trouble comes in
when we think that we can actually build good OO software without
considering how to setup the objects. I see RAD application after RAD
application that are so much trash since the developer did not take the
time to build them right the first time.

My 2 cents


Ell

Nov 24, 1995
Robert C. Martin (rma...@rcm.oma.com) wrote:
:...
: Agreed, this is not waterfall. Moreover, most of this is very
: traditional OO thinking. Our previous confusion was primarily due to
: vocabulary. You were contrasting Booch's vertical slices to your
: horizontal slices. But you don't really have horizontal slices (in
: terms of Analysis, Design, Implementation). You have orthogonal slices
: in terms of subject areas.

Couldn't it be true that an orthogonal slice is a horizontal layer with
its own analysis, design and implementation?

: My only comment here is that you still need vertical slices. It would
: be a grave flaw to analyse and develop the subject areas separately.
: There is no good way to keep them in sync except to cut vertical slices
: through the entire product, touching all the subject areas, and make
: those slices deliverables in a schedule. In that way, for each
: scheduled deliverable, the subject areas must be in sync. This closes
: the loop and staves off the integration nightmare.

While I think this is true, isn't it possible to develop parts of a system
relatively independently, at times, based on specification of the
interfaces between the parts? Which is another reason why establishing
"structural" architecture "early on" is often a good thing. The
interfaces between parts being determined of course by overall system
analysis and consequent overall system architecture.

Elliott

Robert C. Martin

Nov 25, 1995
In article <NEWTNews.81723...@sallys.projtech.com> st...@projtech.com writes:

> rmartin said:
> > My only comment here is that you still need vertical slices. It would
> > be a grave flaw to analyse and develop the subject areas separately.
> > There is no good way to keep them in sync except to cut vertical slices
> > through the entire product, touching all the subject areas, and make
> > those slices deliverables in a schedule. In that way, for each
> > scheduled deliverable, the subject areas must be in sync. This closes
> > the loop and staves off the integration nightmare.
>
> I disagree entirely.

That surprises me. I thought we were coming to terms.

> What do you mean "to keep them in sync?"

Take any two domains that must work together in some project. If the
two domains are not tested against each other, then the designers will
have no reason to communicate. They will not prove that the services
of one domain satisfy the needs of the other. They will add services
that the other domain doesn't need, and they will not implement
services that the other domain does need. And none of this will be
discovered until final integration.

> Consider an application domain that depends on an OODBMS domain.
> Why is it a grave flaw to build the OODBMS separately from the
> application? Why will there be an integration nightmare?

Have you ever worked with an OODB that has never before been tied to
an application? I came close to this once. I was part of a project
which was one of the first users of a major OODB about 4 years ago.
The OODB designers had done a real nice job of designing what they
*thought* application designers would need. But they were wrong about
what we needed. The OODB had lots of bells and whistles that we just
didn't need, and was missing some fundamental features that were
extremely important. It was close to Hell.

That is the real issue. You can't design domains in isolation. You
can't build domains in isolation. Integration between the domains
should be frequent and scheduled.

William D. Gooch

Nov 25, 1995
In article <494h5n$f...@dub-news-svc-3.compuserve.com> David Linthicum <70742...@compuserve.com> writes:

>The issue is that when using the "traditional" waterfall methodologies,
>as long as you build in OOA/OOD, it's going to take longer than using a
>RAD development approach. The tradeoff is OO. Using RAD, developers are
>quick to blow by a proper application design to obtain development speed;
>however, if they are using OO in a waterfall, they are not.

Good, experienced developers are not prone to falling
into this trap. A lot of people these days seem to think
that the "right" methodology is a panacea. Problem is,
there is no "right" methodology (Alan Kay has said that
none of them are even close).

Some people will do well regardless of (or in spite of)
methodology, or the lack of one. On the other hand, some
people will do poorly regardless of methodology. The
methodology itself isn't a solution, it's merely a crutch.

Ell

Nov 26, 1995
William D. Gooch (goo...@rwi.com) wrote:
: Some people will do well regardless of (or in spite of)
: methodology, or the lack of one. On the other hand, some
: people will do poorly regardless of methodology. The
: methodology itself isn't a solution, it's merely a crutch.

Granted, methodologies are not a solution; does that mean we can throw our
Rational Roses, Objectorys and OMT Tools away since they are only
crutches?

Elliott

William D. Gooch

Nov 26, 1995
In article <498e6r$e...@news4.digex.net> e...@access1.digex.net (Ell) writes:

>Granted, methodologies are not a solution; does that mean we can throw our
>Rational Roses, Objectorys and OMT Tools away since they are only
>crutches?

That depends mostly on who you are working for, and whether
you have the experience to work without a formal definition of
methodology to support you. Less formal methods, and
combinations of aspects of different formal methodologies, can be
very effective if used well. Plain old seat-of-the-pants hacking
without a plan is fairly unlikely to take you to specific goals.

I have seen methodology lead off on tangents, and even away
from success on occasion, and I think it is best not to take the
formal methodologies too seriously. They can be very effective
if used well, but it is also possible to go through the complete
gamut of a methodology and still not understand what you are
trying to accomplish. Focussing too much on the details of the
methodology itself, rather than the application problem space,
causes this.

David Linthicum

Nov 27, 1995
goo...@rwi.com (William D. Gooch) wrote:

>Good, experienced developers are not prone to falling
>into this trap. A lot of people these days seem to think
>that the "right" methodology is a panacea. Problem is,
>there is no "right" methodology (Alan Kay has said that
>none of them are even close).

I have to agree. However, there should be some sort of formal approach
to the problem. A methodology provides a step-by-step process of moving
through the problem. Good developers are not prone to falling into the
"prototyping death spiral," but I know a lot of programmers who are. :-)
You need some sort of approach, even if it's your own.



>
>Some people will do well regardless of (or in spite of)
>methodology, or the lack of one. On the other hand, some
>people will do poorly regardless of methodology. The
>methodology itself isn't a solution, it's merely a crutch.

So are you saying that we don't need a methodology? Or, are you saying
that there is a tradeoff?

-- Dave Linthicum


Kent Beck

Nov 27, 1995
In <NEWTNews.81706...@sallys.projtech.com>
st...@projtech.com writes:
>Paul's text above is indeed how we view it: The first step in the
>Shlaer-Mellor method is to identify the layers, by identifying
>subject matters (we call them problem domains), and depicting
>the dependencies between the layers in a "domain chart".
>
>Then we analyze each domain (each layer) independently of one another.
>...
>I quote again the last three lines of the original post:
>> > ........... Most development organizations still prefer a
>> > pile-of-paper-based illusion of control to active management of
>> > development risk.
>
>I suggest that a process that:
>
>1. Identifies the domains (layers) in the system, as the first step
>2. Immediately moves to understand (and document) the dependencies
> between the domains
>3. Analyzes each domain separately and (relatively) independently
>
>is taking on active management of development risk, through explicit
>management of the dependencies between the layers. Further, we
>begin this very early on in the development process. Finally I say
>for one last time, that such a process is not a waterfall.
>
>-- steve mellor

The worst risks in the world are integration risks, because you can't
possibly find them until you see the whites of their eyes. If the
domains are all well understood, such risk is minimized because you
know where the landmines are buried. This is never the case for
interesting software. Most competitive software developers are forced
to innovate in several domains at once.

I recommend to my clients that they use practices that maximize
communication between developers of dependent layers:
shoulder-to-shoulder development, episodic design (Ward Cunningham's
phrase; see the Episode patterns in http://c2.com/ppr), group CRC
review, and Grady's vertical slice-o-functionality.

I agree with Robert Martin: integrate early, integrate often.

Kent

Matthew B. Kennel

Nov 27, 1995
Robert C. Martin (rma...@rcm.oma.com) wrote:

: I would claim that most companies are not getting the benefits promised by

: OO because they have chosen to use C++ rather than a true OO system
: (Smalltalk, Objective-C, to name a couple).

: I would claim that many companies are not getting the benefits promised
: by OO because they expect a 'true' OO system to provide those benefits
: automatically.

: Asserting that C++ is in any way responsible for people not achieving
: the benefits of OO is a silly as asserting that people aren't making
: money on the stock market because they are using the wrong broker.

This is not as silly as it seems; it's certainly the case that a broker
with lousy commissions, (and worse) a big spread, and bad execution can
render profitable strategies not so.

The broker could argue "Well it's not my fault that you aren't gaining the
benefits of good stock selection". This is superficially true, but misses
the overall point, which is to make profits and a bad broker is an
impediment to that.

Thus "the benefits of OO" is a means to easier and better programming and
everything about a language matters, not just its object-ness.

: The benefits of OO come from understanding and applying the principles of
: OOD. These principles lead, irrespective of OOPL, to programs that are
: more maintainable, more robust, and more reusable than those that do
: not. These benefits do not automatically come from any particular
: language. Not even Smalltalk.

By virtue of personal experience I think there are language features,
often but not always distinct from "object orientation", that make the
programming process

"more maintainable, more robust, and more reusable"

If this were not so, why even use C++? Because C++ 'supports' people
programming object oriented systems better than C or Fortran. I think there
are languages that do the same to C++.

: There is nothing about any of the C++ environments that I am aware of
: (and I am aware of a few) that prevent, interfere with, or even slightly
: impede iterative development.

I can think of one right away: the lack of static type inference, where the
declared types of variables can depend on the result of other expressions.

In a language which does do this (e.g. Eiffel & Sather for OO languages) you
can change return types and attribute types of one object or subroutines
under development and other dependent pieces of the system will often change
automagically to make things fit and simultaneously retain full static type
checking. It decreases the friction of change.

This is not a trivial benefit.

The lack of GC means that you have to build in logic and facilities for
various kinds of storage management too early into your design. You end
up fixating upon some representation too early. GC decreases the
friction of changes.

I didn't appreciate either of these until I wrote programs in a language
which did them. I'd hate to go back.
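
As a hedged illustration (the C++ of the day had nothing like this,
and Eiffel and Sather go further), modern C++ type deduction gives a
rough feel for the reduced friction of change described above; the
function names are invented.

#include <vector>

// If the return type of measurements() later changes, say from
// std::vector<int> to std::vector<double>, the caller below compiles
// unchanged while staying fully statically type-checked.
std::vector<int> measurements() { return {1, 2, 3}; }

int main() {
    auto data = measurements();   // type deduced from the callee
    auto total = 0.0;
    for (auto v : data)
        total += v;
    return total > 0 ? 0 : 1;
}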

: On the other hand, the C++ compile and
: link loop can be longer than the Smalltalk edit and go loop. So quick
: cycle hacking is easier in Smalltalk than in C++. (Unless you are using
: a tool like Object Center which allows quick cycle hacking in C++).


William D. Gooch

Nov 27, 1995
In article <49b08m$l...@dub-news-svc-1.compuserve.com> David Linthicum <70742...@compuserve.com> writes:

>....

>You need some sort of approach, even if it's your own.

Sure, as long as we agree that "approach" need not mean
"formal methodology."

There are tradeoffs, and whether formal methodology is
needed depends on the circumstances. As you said, there
is always "some ... approach" which guides the process,
even if it's an unplanned or poorly planned approach. Full
blown formal methodology is fine especially for large projects,
but tends to be too overhead-intensive for small ones.

Patrick D. Logan

Nov 27, 1995
rma...@rcm.oma.com (Robert C. Martin) wrote:

>On the other hand, the C++ compile and
>link loop can be longer than the Smalltalk edit and go loop. So quick
>cycle hacking is easier in Smalltalk than in C++. (Unless you are using
>a tool like Object Center which allows quick cycle hacking in C++).

This is dangerously close to the complaint Robert had about B. Meyer's
recent book and Meyer's section about "C hackers". Martin's quote
above seems to imply that the Smalltalk language and environment, where
they differ from the C++ language and environments, only serve
hacking rather than controlled iteration.

Harumph. To each his own.

BTW I used Object Center two years ago. It is the right direction in
many ways for C++, but at that time was not well suited for developing
large applications. There were several hoops to jump through and
limitations that prevented truly 100 percent quick cycle "hacking"
as Robert puts it.

Ell

Nov 28, 1995
William D. Gooch (goo...@rwi.com) wrote:
: Full blown formal methodology is fine especially for large projects,
: but tends to be too overhead-intensive for small ones.

Or at least much of it is unneeded for small, and/or less complex,
projects. Keeping "relative" conceptual distinctions between phases (and
this _does not include_ a temporal stepping between phases), and how
development is viewed in terms of developer vs. user needs, is still of
importance for small, and/or less complex, projects, imo.

Elliott


tut...@velveeta.enet.dec.com

Nov 28, 1995

In <jzq...@bmtech.demon.co.uk> Scott Wheeler
<sco...@bmtech.demon.co.uk> writes:
>
>In Article <30AF37...@vmark.com> Jeff Sutherland writes:
>>... the Waterfall Methodology is fatally
>>flawed and doomed to failure....
>
>Is this news? I was under the impression that this had been generally
>accepted for at least 20 years.
>
>Scott

All of the writers in this thread seem to be arguing that the waterfall
method "is dead", and that the iterative method is superior for _any_
project using object technology. Is it not possible that methods
are like computer languages in that you must choose the one which best
matches the problem to be solved, and that no one method is _always_
preferred?

Some time ago I witnessed a project in which a team of 8 people was asked to
redesign an existing legacy system from scratch. Future versions of the
product would add further functionality, but the first phase would only
replace the current system. As such, the requirements were very well known.
The team attempted to develop the product using a spiral methodology, in
which they divided the project into slices, and then tried to develop each
slice using a mini waterfall.

The project was a failure. It was "delivered" 30% late with only 50% of the
functionality. They found that with each design-a-little, code-a-little,
test-a-little iteration, they did a substantial amount of redesign
and recoding of the code at the interfaces between the vertical slices.
They concluded that the total amount of work needed would have been
greatly reduced had they spent more time on the up front analysis and
design.

They took another crack at the system using a waterfall method. This time
they even took the waterfall to an extreme by staying in the design phase
(not creating a single line of code) until the product had been designed
down to the function call level throughout. This time the project
finished on time with a much lower level of defects than expected.

It seems to me that there are times when different methods are appropriate.
When requirements are very well known (as above), or if the level of complexity
of the problem is not very large (as in a college homework assignment), the
waterfall method may be more appropriate. Granted, these conditions do not
apply to many (if not most) of today's real world applications, but I don't
buy that the iterative design method is _always_ more appropriate where
object technology is used.

-- Tim Uttormark
tut...@velveeta.apdev.cs.mci.com


Bill Crews

Nov 28, 1995
In article <goochb.16...@rwi.com>,
William D. Gooch <goo...@rwi.com> wrote:
>In article <494h5n$f...@dub-news-svc-3.compuserve.com> David Linthicum <70742...@compuserve.com> writes:
>
>>The issue is that when using the "traditional" waterfall methodologies,
>>as long as you build in OOA/OOD, it's going to take longer than using a
>>RAD development approach. The tradeoff is OO. Using RAD, developers are
>>quick to blow by a proper application design to obtain development speed;
>>however, if they are using OO in a waterfall, they are not.
>
>Good, experienced developers are not prone to falling
>into this trap. [...]

The entire raison d'etre for methodologies is to formalize what is otherwise
in someone's head so that it may be used by others, so that it may be
used to coordinate the work of many, and so that project results are more
predictable. Years ago, I found that I had a talent for RADing projects
-- that is, expediting projects and surviving with happy clients -- but I
have been struggling project by project since then to formalize how I do it.
Iteration/feedback is certainly an essential ingredient. I won't consider
a methodology without it built in. But vertical slices need _interaction_
or _interplay_, not iteration between them. With horizontal slices,
either interaction or iteration seems to work fine. Unfortunately, each
published OO methodology seems to omit something essential, so I am still
trying to roll my own. Hmmmm... Maybe it is getting to be time to write?

-bc
--
----------------------------------------------------
Bill Crews | cr...@panix.com | New York, NY, USA

Cathy Mancus

Nov 28, 1995

In article <49ffbj$o...@dns1.mci.com>, tut...@velveeta.enet.dec.com () writes:
> It seems to me that there are times when different methods are appropriate.
> When requirements are very well known (as above), or if the level of
> complexity of the problem is not very large (as in a college homework
> assignment), the waterfall method may be more appropriate.

I think you have put your finger on a very important, oft-overlooked
(at least by management) point, and it goes beyond the waterfall/iterative
battle. Specifically, the success of most projects depends on how
well the requirements are understood early in the design phase.

The biggest headaches on projects I've seen up close are almost never
due to "bad" design; they are due to design that didn't take into account
requirements that were "discovered" late in the design phase or even
the implementation phase. The same thing can happen in an iterative
model, because it is not time- or resource-efficient to scrap half the
design in a late iteration and recode it all.

I believe the biggest weakness in the software business today is
the lack of sufficient process, and understanding of process, to figure out
what you want to build *before* you call in the programmers to design and
implement it. Even the best designers will often lack background in the
domain in which they are working, due to high turnover and large amounts
of contract work.

[The above opinions are my own; no corporate entity is responsible.]

/-------------------------------------------------------------------\
| Catherine Mancus <ca...@zorac.cary.nc.us> |
| PP-SEL, N5WVR "God is a sponge." |
\-------------------------------------------------------------------/

Kent Beck

Nov 28, 1995
In <49fk7a$j...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
> I believe the biggest weakness in the software business today is
> the lack of sufficient process, and understanding of process, to figure out
> what you want to build *before* you call in the programmers to design and
> implement it. Even the best designers will often lack background in the
> domain in which they are working, due to high turnover and large amounts
> of contract work.

Fairy-land is the only place you know what you're supposed to do before
you start. I've seen three reactions to this: force the client to
commit to something early and deliver a system they don't want, accept
changes throughout the project and end up with a late, low quality
system, or deliver the system in pieces, imagining the long term result
but being prepared to change. This last is what works best for me, and
(giving a brief nod to the title of this thread) what is suggested by
SCRUM.

Manage change.

Kent

Mark Ratjens

Nov 28, 1995

In article <30B246...@vmark.com>, <js...@vmark.com> writes:

> At a recent tutorial at an OO Conference, I made the point that
> there were a number of companies complaining about not getting
> the benefits promised by object technology. In every case that
> I have been able to examine closely, they were not using an
> iterative/incremental development process (see Pittman, IEEE
> Software, Jan 93).
>
> I explained to the attendees that if they use the Waterfall
> approach, they can expect to get only minimal productivity gains
> using OO technology and there was almost a rebellion in the
> room. At least one development manager said he could not adopt
> OO if he had to change any of the current (antiquated)
> development process.

My observations substantially agree with Jeff's. If we look at this more
deeply we can see that the software development industry is very much in a
transition phase with object technology. Historically, OT impacted
programming first, then hit analysis & design and methods. Its impact on change
is far from over, though. There are many areas where we can apply the basic tenets
of OT - organisational change being among them.

I am not surprised that Jeff caused such a "rebellion" with managers - people
having their cages rattled is a sure sign that there are some deep underlying
changes in mind set in the offing, or at least resistance to change in mind
set, depending on the person you are dealing with.

There is a parallel here with management practices in the wider business
community. Business Process Reengineering is having a similar effect. What I
find interesting is that many of the tenets of BPR support what we are trying
to do when we move from linear, activity-oriented management (which is what
the waterfall model basically is) to one of collaborative, result-oriented
management (which is what the iterative, incremental approach is).

I think it is way past time for us to consider turning BPR principles in on
the information technology departments of our businesses. In so doing we will
gain a lot of support for changing the way we traditionally do things, while
appealing to management in terms that are being used across a wider problem than
just software development. It is here that we can seriously tackle what it
means to shift paradigms, and produce the same kinds of results that
successfully reengineered businesses enjoy, that is, order-of-magnitude
improvements in productivity and quality.

For a basic text on BPR refer to Michael Hammer & James Champy,
"Reengineering the Corporation: A Manifesto for Business Revolution".
Personally I don't think any discussion on changing software development
processes should be made without at least some understanding of the ideas this
book contains. You might also find a lot of support for your own arguments if
you are looking to dump the waterfall life-cycle.


Mark Ratjens
Class Technology Pty Limited
E-mail: ma...@class.com.au

James P Shaw

Nov 28, 1995
rma...@rcm.oma.com (Robert C. Martin) wrote:


... deleted text ...

>Asserting that C++ is in any way responsible for people not achieving
>the benefits of OO is as silly as asserting that people aren't making
>money on the stock market because they are using the wrong broker.

OR a poor craftsman blames their tools!
Imagine what the creative genius of Leonardo Da Vinci could have
invented with our current technology.

>The benefits of OO come from understanding and applying the principles of
>OOD. These principles lead, irrespective of OOPL, to programs that are
>more maintainable, more robust, and more reusable than those that do
>not. These benefits do not automatically come from any particular
>language. Not even Smalltalk.

This is true! The main problem will always remain:

1) What is the problem at hand?
2) HOW DO WE SOLVE THE PROBLEM?

Whether you use Smalltalk or C++, etc., will not make your
product better. The most important part of system integration (and
software development) will always remain in the conceptualization,
analysis and design phases. It is here that most of the HARD work (this
doesn't mean that the implementation is an EASY job) is done.

The language used to IMPLEMENT the system should be based upon the
platform, developer's knowledge and ease of use (and other criteria).
Trying to develop a system in C++ with a bunch of Smalltalk
programmers will make the job harder. And vice-versa!

I think we all can agree that an improperly designed system, no matter
what language is used (OO or procedural), will not fly!

Ciao!
Jim Shaw

The opinions expressed here are my own and do not reflect the opinions
of my company!

David N. Smith

Nov 29, 1995
tut...@velveeta.enet.dec.com writes:

>
>Some time ago I witnessed a project in which a team of 8 people was asked to
>redesign an existing legacy system from scratch. Future versions of the
>product would add further functionality, but the first phase would only
>replace the current system. As such, the requirements were very well known.
>The team attempted to develop the product using a spiral methodology, in
>which they divided the project into slices, and then tried to develop each
>slice using a mini waterfall.
>
>The project was a failure. It was "delivered" 30% late with only 50% of the
>functionality. They found that with each design-a-little, code-a-little,
>test-a-little iteration, they did a substantial amount of redesign
>and recoding of the code at the interfaces between the vertical slices.
>They concluded that the total amount of work needed would have been
>greatly reduced had they spent more time on the up front analysis and
>design.

(1) Re: "the first phase would only replace the current system. As such,
the requirements were very well known." Even if they were going to
simply recode the legacy system in RPG or COBOL or whatever, I'd bet that
no one had a clue what the system really did. Most legacy code has been
bent and twisted beyond any original design. Simply recoding it requires
significant work just to find out what it really does and to design a
clean system to do it.

However, I suspect that the reimplementation effort was intended to build
a framework for significant extensions later. That should have forced a
careful look at not only the real requirements of the legacy system, but
at the real requirements of the rewrite.

Instead, it looks like they assumed they knew what they were doing and
started coding too soon. Then they kept finding out that they didn't know
and had to fix it, again and again.

While, as Kent says, "Fairy-land is the only place you know what you're
supposed to do before you start", there is no excuse for not taking into
account what you DO know or should be expected to know. They knew they
had to match the functionality of the legacy system. They knew they had
to build a base for further growth. They seem to have not thought through
what they should have thought through.


(2) Maybe I live in a fantasy world, but I've always found it mandatory
to write code in small bits which can be assembled into larger bits,
which can be assembled into yet larger bits until I have a running
system. Each set of bits (parts, methods, classes, whatever) is as
general and flexible as seems reasonable in the given circumstances.

I find that I iterate back and forth between the 'big picture' and
'details'. The big picture tells me what kinds of details are needed.
Then I do the details, watching for factors that might affect the big
picture.

But the whole purpose is to get a system that is readily assembled out of
smaller parts. (And parts that are assembled out of littler parts). Why?
Because it is then easy to reassemble a similar system out of those parts
as requirements change. Because if it is easy to replace a part and
reassemble a system then it can readily survive requirements changes. If
the database interface code hides the characteristics of the database
then a new database can be slid in. If the currency model doesn't assume
dollars, or a rounding method, then, ..., well you get the idea.

The programmer literature is filled with rules/guidelines which derive
from such an approach. Write small methods; classes should have 3-5
responsibilities; minimize class coupling; etc, etc, etc, etc. The rules
are not bad, but the reasons get lost. If you do the things that the
rules say not to do, then you don't get flexible and pluggable components.

But if you DO do them, without understanding WHY, then you don't get
flexible and pluggable components either.
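
For instance, here is a minimal C++ sketch of hiding the database
behind an interface, with invented class and method names; the client
code survives a change of backend untouched.

#include <iostream>
#include <string>

struct CustomerStore {                  // the hiding interface
    virtual std::string lookup(int id) = 0;
    virtual ~CustomerStore() {}
};

struct FlatFileStore : CustomerStore {  // today's backend
    std::string lookup(int id) {
        return "flat file record #" + std::to_string(id);
    }
};

struct SqlStore : CustomerStore {       // a new database, slid in later
    std::string lookup(int id) {
        return "SQL row #" + std::to_string(id);
    }
};

void report(CustomerStore& store) {     // client code never changes
    std::cout << store.lookup(42) << "\n";
}

int main() {
    FlatFileStore flat;
    SqlStore sql;
    report(flat);
    report(sql);    // swap backends without touching report()
    return 0;
}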

In article <49fplo$n...@ixnews5.ix.netcom.com> Kent Beck,
ke...@ix.netcom.com writes:
>
>Fairy-land is the only place you know what you're supposed to do before
>you start. I've seen three reactions to this- force the client to
>commit to something early and deliver a system they don't want, accept
>changes throughout the project and end up with a late, low quality
>system, or deliver the system in pieces, imagining the long term result
>but being prepared to change. This last is what works best for me, and
>(giving a brief nod to the title of this thread) what is suggested by
>SCRUM.
>
>Manage change.

Amen. And I contend that much of how one manages change is by coding FOR
the eventual and inevitable change (as well as design for it, of course).

In article <RMARTIN.95...@rcm.oma.com> Robert C. Martin,
rma...@oma.com writes:
>In article <49fk7a$j...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
>
> I believe the biggest weakness in the software business today is
> the lack of sufficient process, and understanding of process, to figure out
> what you want to build *before* you call in the programmers to design and
> implement it.
>

>I have serious doubts as to whether such a process is even possible.
>Requirements have always been the least stable part of any project I
>have worked upon. It does no good to be sure of what you are building
>up front, because by the time you get started, the requirements will
>have changed.
>
>Now, perhaps this is because I generally work in a very dynamic
>market. But I think that this is true of a lot of us, and probably
>represents some of the worst problems in the software industry.
>
>For those of us in this situation, up-front process is not the answer.
>Lots and lots of up front domain analysis is also not the answer. The
>answer lies in understanding how to build systems that can tolerate
>high levels of change.

Agreed, but failure to do up-front analysis and design when you have the
information is bad too. Polya, the mathematician, wrote a book called
'How to Solve It'. In it he suggests steps for solving mathematical
problems. One of the most basic, and one which applies most everywhere,
is 'do what you know, then look at the problem again'.

Dave

__________________________________
David N. Smith
dns...@watson.ibm.com
70167...@compuserve.com
IBM T J Watson Research Center
Hawthorne, NY
__________________________________
Any opinions or recommendations
herein are those of the author
and not of his employer.

David N. Smith

Nov 29, 1995
In article <49hs6g$i...@brtph500.bnr.ca> Cathy Mancus, man...@bnr.ca
writes:
> This works well if the customer knows more or less what he wants and
>has it documented, but continues to pass down small changes during the
>life cycle of the project. The key word is SMALL, as in "does not
>require redesigning the whole product".

As I read this I was reminded of a book called, I think, House, by Tracy
Kidder. I actually read a long excerpt in The Atlantic, but I'm certain
there was a book too.

It's about a small group of designer/builders who do custom houses and a
house that they do for a couple. It's the spec changes that happen as the
house is being built, and reactions to them, that make the story.

Fred Morris

Nov 29, 1995
I've been watching this whole thing. It's like EST or Scientology. Take a
bunch of words and twist 'em with special meanings so that they're a
self-consistent universe. Then take that self-consistent universe and the
language that works in it and publicize all those nice pat aphorisms that
fall out and watch the fun as people argue over the meanings; and of
course the people that understand the self-consistent mind palace know
PeeWee Herman's word-of-the-day and the rest of us are parents coming home
and wondering why the kid screams every time we say the word "door".

Minsky observes in _Society of Mind_ that mathematics is consistent
because we start with a set of consistent propositions and see what we can
build from them. Whereas in the real world, thought and systems are built
from what works.

"Waterfall Methodology": why not say "a single analysis-design-build
cycle"? Because some of you would disagree even with that. So a nice
hand-wave like "Waterfall Methodology" allows you to agree in a rough
sense. Much as we have concepts like "awareness" or "concept" or "blue",
that we don't agree exactly on, but that we use. They work just fine until
you go to order some blue paint and assume that it will match exactly the
blue paint that's already on your house or car. And that's the problem
here, you are using high-falutin' general concepts as though they have any
precision; which they don't. One person contributing to this thread has
caught onto that, and alludes to it with a reference to a famous book on
solving mathematical problems.

As for slice approaches and all that, most of them have been around for
years, but strangely took a big leap in popularity when Microsoft started
using C++ and (coincidentally, not causally) released Visual BASIC. I
won't pretend at any economic modeling jargon, but companies have for
years made strategic alliances with suppliers when working on large
systems. That's the way it's done, that's the way the risk is shared, and
that's where I'd recommend that managers go to find case studies, in a
jargon they're likely to already comprehend. This thread identifies a
distinction between inhouse and outhouse (pun intentional), and then
elevates it to theoretical levels. feh. Go look at the real-world
evolution of software systems; go look at how a tool is created and then
used, and then iterated. You think I'm talking about C++? Well, maybe.
Maybe the class libraries. But no, that's not really it at all; to stay
there misses the mark. No, take Microsoft Word; please. First nobody knew
what a word processor was, now Word supports the only cross-platform word
processor macro virus. Or Excel: is it a spreadsheet or a database query
tool or an intelligent form or a database? Not even Microsoft knows for
sure. Personally though I'll take ClarisWorks; it possesses a personality
trait that I call "friendliness", which probably relates to some other
traits I call "integrity" and "consistency".

If you're going to design tools, you cannot predict the ways that they
will be used. If you make broken tools, somebody will use them because they're
broken, and when you fix them they will complain that you destroyed a vital
feature. The best thing you can hope for, in any tool, is that ideal
general concept of "personality": that people will generate an
intersubjective consensus regarding its use and future from interacting
with it. Certainly the memes that have been expressed here exhibit
personality from my vantage; and it's not personality I feel like wasting
much time on. To twist up a recent Dilbert comic strip, why can't a
database be mauve if atomic particles can have color and charm? The color
of blue pigment particles and their charm has little bearing on the color
of my house or its charm: the house's personality overrides that of the
atomic particles, something that would never happen in C++, because the
house's personality is never declared, it is discerned.

Time for you dinosaurs to sprout wings and fly. I won't bother with my
company e-mail address, I doubt they'd consider it productive for me to
answer any replies this thread might generate.

Compared to the Minsky _Society of Mind_ CD-ROM, this thread... well,
what's to compare?

--

Fred Morris
m3...@halcyon.com
m3...@slime.atmos.washington.edu

Robert C. Martin

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to
In article <49d4fp$9...@news.jf.intel.com> "Patrick D. Logan" <patrick...@ccm.jf.intel.com> writes:

rma...@rcm.oma.com (Robert C. Martin) wrote:

>On the other hand, the C++ compile and
>link loop can be longer than the Smalltalk edit and go loop. So quick
>cycle hacking is easier in Smalltalk than in C++. (Unless you are using
>a tool like Object Center which allows quick cycle hacking in C++).

This is dangerously close to the complaint Robert had about B. Meyer's
recent book and Meyer's section about "C hackers". Martin's quote
above seems to imply that the Smalltalk language and environment, where
it differs from the C++ language and environments, only serves
hacking rather than controlled iteration.

Harumph. To each his own.

My apologies. The use of the word "hacking" was inappropriate. I did
not mean it in a disparaging way. I have used the term "quick cycle
hacking" for for awhile and simply did not consider the negative
connotation. Let me state for the record that I do not consider the
very short edit/test loop that Smalltalk provides to be a bad
thing. Quite the contrary I think it is a great benefit. I wish the
C++ compile/link loop was as fast.

Cathy Mancus

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to

In article <49fplo$n...@ixnews5.ix.netcom.com>, ke...@ix.netcom.com (Kent Beck ) writes:

> In <49fk7a$j...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
>> I believe the biggest weakness in the software business today is
>>the lack of sufficient process, and understanding of process, to
>>figure out what you want to build *before* you call in the programmers
>>to design and implement it. Even the best designers will often lack
>>background in the domain in which they are working, due to high turnover
>>and large amounts of contract work.

> Fairy-land is the only place you know what you're supposed to do before
> you start.

Nonsense. The problem usually occurs because (1) the developers don't
know who their real users are, and "discover" new ones halfway through
the design (and of course they have new requirements); (2) no one has
bothered to write down the requirements in a clear, detailed document.
There seems to be an attitude in industry that good analyst == good designer
== good implementor. It just ain't so; analysis requires very different
skills from programming and should be done by people who possess those skills.

I've even seen re-implementations of legacy products that didn't work
because the new product team didn't have any experts from the previous product
development team! If ever there is a time when you *do* know what you're
supposed to do before you start, it's redesigning a legacy system.

> I've seen three reactions to this- force the client to
> commit to something early and deliver a system they don't want,

A better response would be "get the client to decide what he wants
before you start design". This may mean shipping the product "late" (but
it isn't really late because the original ship date wasn't realistic if
the customer doesn't know what he wants).
In what other business would we behave this way? Does an architect
throw away a half-finished house because the buyers didn't know what they
wanted when they started and didn't like it when they saw it? No.
He will make changes as requested, but there will be time and budget
impacts which he will pass on to his customers.

> accept changes throughout the project and end up with a late, low quality
> system,

This works well if the customer knows more or less what he wants and
has it documented, but continues to pass down small changes during the
life cycle of the project. The key word is SMALL, as in "does not
require redesigning the whole product".

> or deliver the system in pieces, imagining the long term result
> but being prepared to change.

This may or may not work depending on the domain. The more integrated
the system must be, the harder it will be to do this well, because you
must at least understand all the requirements well enough to design the
interfaces at the beginning. Otherwise shipping new pieces may force
you to redesign, reimplement, and reship the pieces you thought you had
finished.

> Manage change.

Small changes are manageable. Enormous changes that change the entire
focus/design/purpose of the product are not.

Robert C. Martin

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to
In article <49fk7a$j...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:

I believe the biggest weakness in the software business today is
the lack of sufficient process, and understanding of process, to figure out
what you want to build *before* you call in the programmers to design and
implement it.

I have serious doubts as to whether such a process is even possible.

Requirements have always been the least stable part of any project I
have worked upon. It does no good to be sure of what you are building
up front, because by the time you get started, the requirements will
have changed.

Now, perhaps this is because I generally work in a very dynamic
market. But I think that this is true of a lot of us, and probably
represents some of the worst problems in the software industry.

For those of us in this situation, up-front process is not the answer.
Lots and lots of up front domain analysis is also not the answer. The
answer lies in understanding how to build systems that can tolerate
high levels of change.

--

Robert C. Martin

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to
In article <49fplo$n...@ixnews5.ix.netcom.com> ke...@ix.netcom.com (Kent Beck ) writes:

Fairy-land is the only place you know what you're supposed to do before
you start. I've seen three reactions to this- force the client to
commit to something early and deliver a system they don't want, accept
changes throughout the project and end up with a late, low quality
system, or deliver the system in pieces, imagining the long term result
but being prepared to change. This last is what works best for me, and
(giving a brief nod to the title of this thread) what is suggested by
SCRUM.

Manage change.

Well said.

Matthew Fuchs

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to
When all the elements/levels of the system, from analysis docs to
object code can be viewed simultaneously and everyone has their own
workstation, iteration is the obvious way to work. When access to the
computer is extremely expensive, physically onerous, with long
turnaround, Waterfall (i.e., don't program until you know what you
are doing) makes economic sense. People just may not be aware of when
the economics change.

I suspect that you'll do better convincing people to switch to an
iterative development if you explain that Waterfall was great for its
time, but that new technology can liberate them. After all, they're
not being stupid on purpose.

Robert C. Martin (rma...@oma.com) wrote:
: Michael E. Wesolowski wrote:
: >
: > Excuse my ignorance, but why would you recommend against using the
: > Waterfull model for OO technology? I've seen this hinted at before but I
: > don't know the rationale. Thanks.
: >

: The Waterfall method first became important about 30 years ago. At
: that time, most projects were very small. A 10,000 line program
: weighed a good 40 pounds, and was too large to carry down the hall
: to the card reader.

...

: In short, the difference between Iterative development, and Waterfall
: development is the difference between an open-loop and a closed-loop
: control system. Closed loop systems (systems that employ feedback) are
: generally superior.
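
To make the quoted open-loop/closed-loop analogy concrete, here is a
minimal toy sketch in C++ (the numbers are made up for illustration, not
from any real project): the open-loop plan ships whatever the day-one spec
said, while the closed-loop process re-measures the drifting target each
cycle and corrects part of the remaining error.

    #include <cstdio>

    // Toy model of the analogy: "requirements" drift while we build.
    // The open-loop (waterfall) plan commits to the day-one spec; the
    // closed-loop (iterative) process re-measures the target each cycle
    // and corrects half of the remaining error.
    int main() {
        double target = 100.0;       // what the customer actually needs
        double plan = target;        // the spec as written on day one
        double openLoop = 0.0;       // built against the original spec only
        double closedLoop = 0.0;     // corrected with feedback each cycle

        for (int cycle = 1; cycle <= 10; ++cycle) {
            target += 5.0;                               // requirements drift
            openLoop = plan;                             // no feedback: ship the plan
            closedLoop += 0.5 * (target - closedLoop);   // feedback: correct half the error
            printf("cycle %2d: open-loop error %5.1f, closed-loop error %5.1f\n",
                   cycle, target - openLoop, target - closedLoop);
        }
        return 0;
    }

The open-loop error grows without bound as the target drifts; the
closed-loop error stays bounded, which is the whole point of feedback.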

Matthew Fuchs
fu...@cerc.wvu.edu
http://www.cerc.wvu.edu/~fuchs


David Harmon

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to
In article <RMARTIN.95...@rcm.oma.com>,
Robert C. Martin <rma...@oma.com> wrote:
>
>Requirements have always been the least stable part of any project I
>have worked upon.

I hope this illustrates why some of us have a problem with your previous
use of the word "stable". Requirements are the thing upon which
everything else depends, and thus by your previous definition are
ultimately "stable". Yet here we see how false this can be. We may all
wish for requirements to be stable, but it doesn't make it so. Using
"stable" in that way is quite misleading; there must be some other word
that means what you mean.

Kent Beck

unread,
Nov 29, 1995, 3:00:00 AM11/29/95
to
In <49hs6g$i...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
>
>In article <49fplo$n...@ixnews5.ix.netcom.com>, ke...@ix.netcom.com
>(Kent Beck ) writes:
>> Fairy-land is the only place you know what you're supposed to do before
>> you start.
>
> Nonsense.
Them's fighting words. Fortunately, by the time you're done, I think we
agree far more than we disagree, so I won't have to fedex you my
gauntlet.

>The problem usually occurs because (1) the developers don't
>know who their real users are, and "discover" new ones halfway through
>the design (and of course they have new requirements); (2) no one has
>bothered to write down the requirements in a clear, detailed document.

The act of writing a system changes the users' understanding of what it
should do. Even if someone writes document (2), it will change with the
delivery of the first demo.

>> or deliver the system in pieces, imagining the long term result
>> but being prepared to change.
>

> This may or may not work depending on the domain. The more integrated
>the system must be, the harder it will be to do this well, because you
>must at least understand all the requirements well enough to design the
>interfaces at the beginning. Otherwise shipping new pieces may force
>you to redesign, reimplement, and reship the pieces you thought you had
>finished.

Projects either do this (that is, re-engineer and refactor) or they pay
an enormous price. You can reduce the cost of the necessary on-going
engineering by:
1) having comprehensive test suites so you can redesign and
reimplement with confidence (see the sketch below) and
2) developing consistently with patterns, which tend to buy you
flexibility early in the lifecycle
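
As a minimal sketch of point (1), assuming nothing beyond standard C++
(the function and its behavior are hypothetical): a test suite that pins
down behavior lets you rip out and reimplement the code underneath, then
re-run the suite for confidence.

    #include <cassert>
    #include <string>

    // Hypothetical function under test. Its implementation is free to be
    // redesigned at any time, because the asserts below pin down behavior.
    std::string greet(const std::string& name) {
        return "Hello, " + name;
    }

    int main() {
        assert(greet("world") == "Hello, world");
        assert(greet("") == "Hello, ");
        return 0;   // silent exit: the reimplementation is still correct
    }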

You don't really know how the system should be built until you've built
it. It is exactly those things you don't know until two years later
that are most valuable. Discarding them because programming used to be
hard is short sighted at best.

Kent

Ell

unread,
Nov 30, 1995, 3:00:00 AM11/30/95
to
Robert C. Martin (rma...@oma.com) wrote:
: In article <49fk7a$j...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
:: I believe the biggest weakness in the software business today is
:: the lack of sufficient process, and understanding of process, to
:: figure out what you want to build *before* you call in the programmers to
:: design and implement it.


: I have serious doubts as to whether such a process is even possible.
: Requirements have always been the least stable part of any project I
: have worked upon.

How does the fact that requirements change make "process" not possible?
A good "process" fully takes changing requirements into account as a part
of the process. I take it you do not use any "process" including Booch?

:...
: For those of us in this situation, up-front process is not the answer.
: Lots and lots of up front domain analysis is also not the answer. The
: answer lies in understanding how to build systems that can tolerate
: high levels of change.

Process is not just upfront, it just is. Good process allows for
changing requirements. A good process within a good methodology is
capable of building systems which tolerate high levels of change.

Elliott

Ell

unread,
Nov 30, 1995, 3:00:00 AM11/30/95
to
In article <49fplo$n...@ixnews5.ix.netcom.com> ke...@ix.netcom.com (Kent
Beck ) writes:
:
: Fairy-land is the only place you know what you're supposed to do before
: you start.

Good domain/area and specific application analysis can help one see at
least the major outline of what needs to be done.

: I've seen three reactions to this- force the client to
: commit to something early and deliver a system they don't want, accept
: changes throughout the project and end up with a late, low quality
: system, or deliver the system in pieces, imagining the long term result
: but being prepared to change.

Good "process", including domain/area analysis and specific application
analysis, should allow _accurately_ "imagining the long term result, but
being prepared to change."

: Manage change.

And the major changes to be concerned about are changes in the logical
(what) requirements. Of course the physical implementation should make
such changes easy.

Elliott

The Graphical Gnome

unread,
Nov 30, 1995, 3:00:00 AM11/30/95
to
In article <RMARTIN.95...@rcm.oma.com>,
rma...@oma.com (Robert C. Martin) wrote:
>In article <49fplo$n...@ixnews5.ix.netcom.com> ke...@ix.netcom.com (Kent Beck
>) writes:
>
> Fairy-land is the only place you know what you're supposed to do before
> you start. I've seen three reactions to this- force the client to

> commit to something early and deliver a system they don't want, accept
> changes throughout the project and end up with a late, low quality
> system, or deliver the system in pieces, imagining the long term result
> but being prepared to change. This last is what works best for me, and
> (giving a brief nod to the title of this thread) what is suggested by
> SCRUM.
>
> Manage change.
>
>Well said.

Very well said. But not always practical.

I'm working in an automation department and all our projects are inhouse
projects. In this way it is impossible to manage changes. We sometimes have to
undo weeks of work, just because some manager B decides to have it the other
way around than decided by manager A. And if your manager is afraid of B and
not of A, you're in a hell of trouble.

This does not happen once in a project, but about once every week.

My last project ran for 4 years (planned 3.5, so not too bad).

Working in an atmosphere like this means trying to convince managers that the
people I'm making the program for like it the way it was before. Sometimes you
succeed, most of the time not.

Working in very small parts and adding those parts to the working version is
the only way these projects are manageable.

R.E. den Braasem (ka The Graphical Gnome (r...@ktibv.nl))

Kevin Cline

unread,
Nov 30, 1995, 3:00:00 AM11/30/95
to
In article <49hs6g$i...@brtph500.bnr.ca>,
Cathy Mancus <ca...@zorac.cary.nc.us> wrote:
>
> Nonsense. The problem usually occurs because (1) the developers don't
>know who their real users are, and "discover" new ones halfway through
>the design (and of course they have new requirements); (2) no one has
>bothered to write down the requirements in a clear, detailed document.

Writing down the requirements for a 500K SLOC system in a clear detailed
document is almost impossible; the resulting 1000 page document is too much
for anyone to comprehend. For large projects, the surest way to success
is to analyze a little, build a little, and then build some more.

>There seems to be an attitude in industry that good analyst == good designer
>== good implementor. It just ain't so; analysis requires very different
>skills from programming and should be done by people who possess those skills.
>

This is generally contrary to my experience. Analysis does require
one skill that is not required for coding: the analyst must work with
the user to find out what the user wants. But the "analysts" I have seen
who weren't comfortable doing implementation produced analysis documents
that were nearly useless.

> I've even seen re-implementations of legacy products that didn't work
>because the new product team didn't have any experts from the previous product
>development team! If ever there is a time when you *do* know what you're
>supposed to do before you start, it's redesigning a legacy system.
>

True.

> A better response would be "get the client to decide what he wants
>before you start design".

Clients never know exactly what they want; generally they don't know
what is or isn't possible with current technology. Your client won't
be happy when he specifies a system based on character cell terminals, and
you build it for him, and he then sees his competitor's system with a
GUI that was built just as cheaply. Legally, you are not at fault,
but you haven't done your job.

Through iterative implementation, the engineers and the clients can
jointly decide on core functionality, implement it, and then make
an informed decision on the next thing to do.

> In what other business would we behave this way? Does an architect
>throw away a half-finished house because the buyers didn't know what they
>wanted when they started and didn't like it when they saw it? No.
>He will make changes as requested, but there will be time and budget
>impacts which he will pass on to his customers.
>

This analogy is widely used but is totally incorrect. Building a
house from plans is a manufacturing process, similar to making a new
copy of a finished computer program. Implementating a computer
program from requirements or design documents is a design process, similar
to creating the detailed plans for the house. The correct analogues between
designing and building a house and designing and building a computer system
are:

House                              Computer System

General Requirements               Analysis
(desired style, room sizes,
cost, etc.)

Site Selection,                    Design
Floor Plan,
Surface Materials & Furnishings

Detailed Blueprints                Implementation
(wiring, plumbing, structural
design, etc.)

Construction                       Software duplication

--
Kevin Cline

Rick DeNatale

unread,
Dec 1, 1995, 3:00:00 AM12/1/95
to
In article <49i04g$p...@cloner2.ix.netcom.com>, ke...@ix.netcom.com (Kent
Beck ) wrote:

>>The problem usually occurs because (1) the developers don't
>>know who their real users are, and "discover" new ones halfway through
>>the design (and of course they have new requirements); (2) no one has
>>bothered to write down the requirements in a clear, detailed document.

>The act of writing a system changes the users' understanding of what it
>should do. Even if someone writes document (2), it will change with the
>delivery of the first demo.

This last effect is very important. There are many of these 'wicked
problems' where the problem statement is wrapped up in the solution in
such a way that implementing a solution affects and redefines the problem.
This happens in the 'real world' all the time. Building a new airport
changes road traffic patterns for example.

The first documented case of this effect in a computer system that I'm
aware of was the New York Times editing system. After many requirement
sessions the system was implemented. Then when the users sat down in front
of it they said, 'yes, you did exactly what we told you to do, but now
that it's implemented I understand what it really needs to do, and this
isn't it.'

The difference between rapid experimental programming/prototyping, a
system like this, and the 'Application backlog' is merely one of
time-scale. The wide-spread availability of personal computers, and
user-friendly applications has meant that users are less likely to put up
with the old computer technology driven designs of applications and want
applications that meet their needs rather than the computer's or the systems
analyst's.
This drives the need for iterative approaches, and makes implementation
technology which supports such approaches very attractive.

--
Rick DeNatale
Still looking for a cool signature ;-)

Cathy Mancus

unread,
Dec 1, 1995, 3:00:00 AM12/1/95
to

In article <49l8eo$7...@sun132.spd.dsccc.com>, kcl...@sun132.spd.dsccc.com (Kevin Cline) writes:

> Cathy Mancus <ca...@zorac.cary.nc.us> wrote:
>>There seems to be an attitude in industry that good analyst == good designer
>>== good implementor. It just ain't so; analysis requires very different
>>skills from programming and should be done by people who possess those skills.

> This is generally contrary to my experience. Analysis does require
> one skill that is not required for coding: the analyst must work with
> the user to find out what the user wants. But the "analysts" I have seen
> who weren't comfortable doing implementation produced analysis documents
> that were nearly useless.

I know an awful lot of coders who are uncomfortable with design
documentation, any form of meaningful analysis, or much of anything except
writing and debugging code. These are the people whom you must keep away
from the analysis and high-level design stages. Also, coders tend to
think in terms of code rather than high-level concepts, which makes it
hard for them to communicate with end-users.

>> A better response would be "get the client to decide what he wants
>>before you start design".

> Clients never know exactly what they want; generally they don't know
> what is or isn't possible with current technology. Your client won't
> be when he specifies a system based on character cell terminals, and
> you build it for him, and he then sees his competitor's system with a
> GUI that was built just as cheaply. Legally, you are not at fault,
> but you haven't done your job.

As I stated previously, small changes are not a problem. Very
early in the process, you should be able to find out what your client
wants in terms of VT100's vs GUI's, how extensible should the system be,
etc. If you don't tell him about his options and the impact of his
choices *up front*, you aren't doing your job. Part of your job consists
of asking him hard questions.
There will always be omissions from this process that will require
later changes. That's OK, particularly with a good iterative model.
But the omissions should be small for the most part.



> Through iterative implementation, the engineers and the clients can
> jointly decide on core functionality, implement it, and then make
> an informed decision on the next thing to do.

I am not against iterative implementation, but I don't believe it
to be a magic bullet.

James Gerber

unread,
Dec 1, 1995, 3:00:00 AM12/1/95
to
rma...@rcm.oma.com (Robert C. Martin) wrote:

>
> I would claim that most compaines are not getting the benefits promised by
> OO because they have chosen to use C++ rather than a true OO system
> (Smalltalk, Objective-C, to name a couple).

"It is a poor workman that blames his tools"

Maybe people need a more robust system than Smalltalk, which still clings to
the obviously false idea that the world can be modeled by single inheritance.
--
-----------------------------------------------------------------
| E-mail: jge...@voicenet.com |
| Phone: 609.346.5888 |
| FAX: 609.346.5106 |
-----------------------------------------------------------------

Alan Lovejoy

unread,
Dec 1, 1995, 3:00:00 AM12/1/95
to
In <denatale-011...@vyger317.nando.net> dena...@nando.net
(Rick DeNatale) writes:
>The first documented case of this effect in a computer system that I'm
>aware of was the New York Times editing system. After many requirement
>sessions the system was implemented. Then when the users sat down in front
>of it they said, 'yes, you did exactly what we told you to do, but now
>that it's implemented I understand what it really needs to do, and
>this isn't it.'

This is what I call the "Gordian Knot Of Analysis." The design of the
program depends upon the requirements. The requirements depend upon
the business process. The business process depends upon the tools
used to perform the process--such as the computer programs that are
used. Thus we have a classic "three body problem," and the only way
to converge upon the best solution is to iteratively solve the problem.
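
A toy numerical illustration of the same point (standard C++, nothing
project-specific): solving x = cos(x). A single pass from an initial
guess, the analogue of one analysis-design-build cycle, lands far from
the answer; repeated passes, each fed back the previous result, converge.

    #include <cmath>
    #include <cstdio>

    // Successive approximation to the fixed point of x = cos(x).
    // Each iteration re-solves using feedback from the last result,
    // starting from a rough initial approximation (the "hint").
    int main() {
        double x = 0.0;   // initial approximation
        for (int i = 1; i <= 20; ++i) {
            double next = std::cos(x);
            printf("iteration %2d: x = %.6f (change %+.6f)\n", i, next, next - x);
            x = next;
        }
        return 0;   // x is now near the fixed point, about 0.739085
    }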

But that doesn't mean "just start coding." That's even worse!!!
Iterative algorithms generally work better when they can use good
initial approximations and hints. It also helps to have enough
experience to know when the users are probably mistaken and/or don't
understand the implications of the technology. Give them what they
said they wanted, but design it to be able to do what they probably
wanted without much extra effort. Then when they ask for it, estimate
the job at an extra six months, implement it in two weeks, and then go
on a long vacation :-).

--Alan Lovejoy

Robert C. Martin

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
In article <49hs6g$i...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:

In article <49fplo$n...@ixnews5.ix.netcom.com>, ke...@ix.netcom.com (Kent Beck ) writes:
> Fairy-land is the only place you know what you're supposed to do before
> you start.

Nonsense. The problem usually occurs because (1) the developers don't
know who their real users are, and "discover" new ones halfway through
the design (and of course they have new requirements);

True, but then many of us work in environments where this is
inevitable. i.e. when you are producing a product for sale to a
market at large, the hot requirements happen to be those requirements
that the current hot customer thinks are important. And with every
new customer there is another new set of hot requirements.

(2) no one has
bothered to write down the requirements in a clear, detailed
document.

Again, there are cases where this is inevitable. One of my clients,
for example, has asked us to develop a system for him, but is unsure
about the details. He needs to "see it work" and show it to *his*
customer before he can be sure that what he said he wanted, is really
what he wants.

There seems to be an attitude in industry that good analyst == good
designer == good implementor. It just ain't so; analysis requires
very different skills from programming and should be done by people
who possess those skills.

There is truth to this. There are those who specialize in analysis, or
design, or implementation; and can do one better than the others.
However, I have found that the *best* analysts are also the *best*
designers and *best* implementors.

I've even seen re-implementations of legacy products that didn't
work because the new product team didn't have any experts from the
previous product development team! If ever there is a time when
you *do* know what you're supposed to do before you start, it's
redesigning a legacy system.

I have seen a slightly different situation. The team that was
re-implementing the legacy system had plenty of domain experts.
However the legacy system was not dead, and was being actively
maintained while the reimplementation was in process. As a result,
the reimplementation team had to "keep up" with the drifting
requirements of the legacy system. Needless to say, the
reimplementation took forever, and when it was "completed" it was a
mess.

> I've seen three reactions to this- force the client to
> commit to something early and deliver a system they don't want,

A better response would be "get the client to decide what he wants
before you start design".

Ha! Again I say HA! This may work in some environments, but it
definitely does not work in mine. My clients change their
requirements regularly.

Have you ever had a contractor build a house for you? There are a
million decisions to be made; and you have to make them without
actually "seeing" them. When those decisions are implemented, there
are inevitably things that are wrong and simply have to be fixed. So
it is with some software environments, only several orders of
magnitude worse.

In what other business would we behave this way? Does an architect
throw away a half-finished house because the buyers didn't know what they
wanted when they started and didn't like it when they saw it? No.
He will make changes as requested, but there will be time and budget
impacts which he will pass on to his customers.

This is the more realistic mode of operation. Acknowledge, up front,
that there are going to be changes, and set up a policy for dealing
with them.

This works well if the customer knows more or less what he wants and
has it documented, but continues to pass down small changes during the
life cycle of the project. The key word is SMALL, as in "does not
require redesigning the whole product".

What the customer thinks is SMALL may not be what the designers think
is small.

> or deliver the system in pieces, imagining the long term result
> but being prepared to change.

This may or may not work depending on the domain. The more integrated
the system must be, the harder it will be to do this well, because you
must at least understand all the requirements well enough to design the
interfaces at the beginning. Otherwise shipping new pieces may force
you to redesign, reimplement, and reship the pieces you thought you had
finished.

In fact, this kind of incremental strategy works *best* in a highly
integrated system. The interfaces do *not* need to be fully
understood to deliver *part* of the system. The interfaces can be
worked out as the system grows and evolves.

> Manage change.

Small changes are manageable. Enormous changes that change the entire
focus/design/purpose of the product are not.

Yet these are precisely the kind of changes that we *need* to be able
to manage. And OOD, coupled with incremental development, is a good tool
for managing large change.

As an example, my associates and I are involved in a project which
totals roughly 300,000 lines of C++. This is not huge, but it is
respectable. Our client has, several times, made LARGE changes in
approach. However, our object-oriented design, and the fact that we
were delivering the system to him in small increments, allowed us to
keep up with his changes, and even make BIG changes with relative
efficiency.

--
Robert Martin | Design Consulting | Training courses offered:
Object Mentor Assoc.| rma...@oma.com | OOA/D, C++, Advanced OO
14616 N. Somerset Cr| Tel: (708) 918-1004 | Mgt. Overview of OOT

st...@projtech.com

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to

In article <RMARTIN.95...@rcm.oma.com>, <rma...@rcm.oma.com> writes:
Steve writes:
> > Consider an application domain that depends on an OODBMS domain.
> > Why is a grave flaw to build the OODBMS separately from the
> > application? Why will there be an integration nightmare?
>
> Have you ever worked with an OODB that has never before been tied to
> an application? I came close to this once. I was part of a project
> which was one of the first users of a major OODB about 4 years ago.
> The OODB designers had done a real nice job of designing what they
> *thought* application designers would need. But they were wrong about
> what we needed. The OODB had lots of bells and whistles that we just
> didn't need, and was missing some fundamental features that were
> extremely important. It was close to Hell.

But wasn't it deserved? You're telling me that you just let
these OODB people run off without any checking of what they
were building? Aacck! One thing I'm sure we can agree on:
dependencies between domains must be managed and controlled
because they are the most dangerous if we get them wrong.

I would suggest that we need tools other than incremental
delivery to ensure that this situation does not happen.

-- steve mellor


Robert C. Martin

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
In article <49j8gb$2...@news4.digex.net> e...@access1.digex.net (Ell) writes:

Robert C. Martin (rma...@oma.com) wrote:
: In article <49fk7a$j...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
:: I believe the biggest weakness in the software business today is
:: the lack of sufficient process, and understanding of process, to
:: figure out what you want to build *before* you call in the programmers to
:: design and implement it.

: I have serious doubts as to whether such a process is even possible.
: Requirements have always been the least stable part of any project I
: have worked upon.

How does the fact that requirements change, make "process" not possible?
A good "process" fully takes changing requirements into account as a part
of the process. I take it you do not use any "process" including Booch?

Oh, Elliott. (Sigh.) I did not say that "process" was not possible. I
said that I did not believe it was possible for any process to "figure
out what you want to build *before* you call in the programmers to
design and implement it."

:...

: For those of us in this situation, up-front process is not the answer.
: Lots and lots of up front domain analysis is also not the answer. The
: answer lies in understanding how to build systems that can tolerate
: high levels of change.

Process is not just upfront, it just is.

Right. I was saying that "Up Front" process is not the
answer. i.e. doing lots of busywork up front trying to completely
define the product before we actually start designing and building it.

Robert C. Martin

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
In article <sourceDI...@netcom.com> sou...@netcom.com (David Harmon) writes:
In article <RMARTIN.95...@rcm.oma.com>,
Robert C. Martin <rma...@oma.com> wrote:
>
>Requirements have always been the least stable part of any project I
>have worked upon.

I hope this illustrates why some of us have a problem with your previous
use of the word "stable". Requirements are the thing upon which
everything else depends, and thus by your previous definition are
ultimately "stable". Yet here we see how false this can be.

Not at all. Many companies have failed *because* they could not
change their product specs to move with the market. When lots of code
depends upon a spec, that spec becomes hard to change, notwithstanding
my previous statement.

The whole idea is to isolate as much code as possible from the spec,
so that the bulk of the code does *not* depend upon the spec. Then,
the spec can be changed with relatively little pain.
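
A minimal sketch of that isolation in C++ (the class and function names
are invented for illustration): the bulk of the code depends on a stable
abstraction, and the volatile spec lives in one small subclass that can
be swapped out.

    #include <cstdio>

    // Stable abstraction: the bulk of the code depends only on this.
    class Notifier {
    public:
        virtual void send(const char* msg) = 0;
        virtual ~Notifier() {}
    };

    // Today's spec: notify by pager. When the spec moves to fax or
    // e-mail, only a new subclass is written; nothing below changes.
    class PagerNotifier : public Notifier {
    public:
        void send(const char* msg) { printf("PAGE: %s\n", msg); }
    };

    // The "bulk of the code": insulated from the spec by the abstraction.
    void reportFailure(Notifier& n) { n.send("build failed"); }

    int main() {
        PagerNotifier pager;
        reportFailure(pager);
        return 0;
    }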

The reason requirements are unstable is that they are tied to
money. And when the motivations of the industry change, the
requirements have to change too, and quickly. It does not matter how
hard the software is to change, the price will either be paid, or the
project will fail. Thus there is a huge impetus behind the changing
requirements, one that cannot be withstood.

Yet, a program that is overly dependent upon its requirements may
prove to be too inflexible to change as quickly as the requirements
do. And so the delivery *against* the requirements is stable by
comparison to the requirements themselves.

We may all
wish for requirements to be stable, but it doesn't make it so. Using
"stable" in that way is quite misleading; there must be some other word
that means what you mean.

Stability has a simple definition. A "state" is stable if it requires
effort to change that state. The more effort required, the more stable
the state. Software is stable if it requires great effort to change it.

Robert C. Martin

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
In article <DIusH...@ktibv.nl> r...@ktibv.nl (The Graphical Gnome) writes:
>In article <49fplo$n...@ixnews5.ix.netcom.com> ke...@ix.netcom.com (Kent Beck
>) writes:
>
> Fairy-land is the only place you know what you're supposed to do before
> you start. I've seen three reactions to this- force the client to
> commit to something early and deliver a system they don't want, accept
> changes throughout the project and end up with a late, low quality
> system, or deliver the system in pieces, imagining the long term result
> but being prepared to change. This last is what works best for me, and
> (giving a brief nod to the title of this thread) what is suggested by
> SCRUM.
>
> Manage change.
>
>Well said.

Very well said. But not always practical.

I'm working in an automation department and all our projects are
inhouse projects. In this way it is impossible to manage
changes. We sometimes have to undo weeks of work, just because some
manager B decides to have it the other way around than decided by
manager A. And if your manager is afraid of B and not of A, you're in
a hell of trouble.

Agreed, but this is exactly where the software engineers must manage
change. You must build your software so that it can be adapted to
the whims of these managers (no mean feat). You must deliver and
demonstrate *often* so that they don't have any excuses about being
surprised.

This does not happen once in a project, but about once every week.

Right. This is not much different from any other environment in which
I have worked. The requirements are always the least stable part of
any project.

Working in very small parts and adding those parts to the working
version is the only way these projects are manageble.

Right.

Piercarlo Grandi

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
>>> On Fri, 01 Dec 1995 13:47:33 GMT, jge...@omni.voicenet.com (James
>>> Gerber) said:

James> rma...@rcm.oma.com (Robert C. Martin) wrote:

rmartin> I would claim that most compaines are not getting the benefits
rmartin> promised by OO because they have chosen to use C++ rather than
rmartin> a true OO system (Smalltalk, Objective-C, to name a couple).

It is rather hard to imagine what you mean by "true OO" when you put
together Smalltalk and Objective-C and oppose them to C++. C++, like
Smalltalk and Smalltalk-NeXT (aka Smalltalk-C aka Objective-C :->) does
support and (somewhat) enforce OO and the techniques that help
programming-in-the-large.

On the other hand I agree with the conclusion: but the main defect of
C++ is not that it is not "true OO", whatever that means, but rather that
it is too complex and low-level to be used in application programming
by application programmers.

James> "It is a poor workman that blames his tools"

This piety does not mean very much in programming, where tools, and in
particular notation, have a proven and large effect on both productivity
and quality.

James> Maybe people need a more robust system than Smalltalk, which
James> still clings to the obviously false idea that the world can be
James> modeled by single inheritance.

``Smalltalk'' is such a flexible moniker that it does cover even
languages with multiple inheritance...

Besides, inheritance (as most popularly defined) is not a technology
that helps model the world, but to organize an application's code
structure (code, not data -- which had better be organized by some
E-R/RM inspired technology). As such, limited as it is, it is remarkably
effective (yet I have often railed that it is indeed _too_ limited).

To confuse the code structure of an application with a model of the
world is a common and well-published misconception, but a moment's
reflection should give at least an intuitive feeling that the two are
completely different concepts in different domains of discourse.

Fred Morris

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
In article <yf3loov...@sabi.demon.co.uk>, pier...@sabi.demon.co.uk
(Piercarlo Grandi) wrote:

> On the other hand I agree with the conclusion: but the main defect of
> C++ is not that it is not "true OO", whatever that means, but rather that
> it is too complex and low-level to be used in application programming
> by application programmers.

Could you graph that? 1 Guru Point for "bein' out there".

Bytesmiths

unread,
Dec 2, 1995, 3:00:00 AM12/2/95
to
jge...@omni.voicenet.com (James Gerber) writes:
"Maybe people need a more robust system than Smalltalk, which still
clings to the obviously false idea that the world can be modeled by single
inheritance."

Look up "Delegation is Inheritance." I think it was Ingalls & Borning,
OOPSLA 86 Proceedings, but it could have been from an earlier SIGPLAN. Q.
E. D.

The only people I ever hear this from are those who have not used
Smalltalk. Smalltalk-80v2 had MI -- it proved to be under-used and a
nuisance, and was ripped out. General delegation is at least as useful as
multiple inheritance, albeit not as efficient in its present form. Some
vendors (QKS comes to mind) have added efficient pre-dispatch actions,
which makes general delegation more than a match for MI.

The problem with MI is you usually multiply inherit a lot of garbage that
you don't want, whereas with delegation, you add in what you need. I don't
call MI "robust," I call it "scary!"

However, we are drifting from the thread into petty and useless language
wars. Getting back to the Subject: process is important and vital; I
suspect SCRUM is more a process than it admits. I agree that if you really
do not develop under a process, you cannot measure success, and you cannot
assure repeatedly good (or bad) results.

Jan

Jan Steinman <jan.byt...@acm.org>
Barbara Yates <barbara.b...@acm.org>
2002 Parkside Ct., West Linn, OR 97068, USA +1 503 657 7703

st...@projtech.com

unread,
Dec 3, 1995, 3:00:00 AM12/3/95
to

In article <49d874$q...@ixnews5.ix.netcom.com>, <ke...@ix.netcom.com> writes:
> In <NEWTNews.81706...@sallys.projtech.com>
> st...@projtech.com writes:
> >Paul's text above is indeed how we view it: The first step in the
> >Shlaer-Mellor method is to identify the layers, by identifying
> >subject matters (we call them problem domains), and depicting
> >the dependencies between the layers in a "domain chart".
> >
> >Then we analyze each domain (each layer) independently of one another.
> >...
> >I quote again the last three lines of the original post:
> >> > ........... Most development organizations still prefer a
> >> > pile-of-paper-based illusion of control to active management of
> >> > development risk.
> >
> >I suggest that a process that:
> >
> >1. Identifies the domains (layers) in the system, as the first step
> >2. Immediately moves to understand (and document) the dependencies
> > between the domains
> >3. Analyzes each domain separately and (relatively) independently
> >
> >is taking on active management of development risk, through explicit
> >management of the dependencies between the layers. Further, we
> >begin this very early on in the development process. Finally I say
> >for one last time, that such a process is not a waterfall.

> The worst risks in the world are integration risks, because you can't
> possibly find them until you see the whites of their eyes. If the
> domains are all well understood, such risk is minimized because you
> know where the landmines are buried. This is never the case for
> interesting software. Most competitive software developers are forced
> to innovate in several domains at once.
>
> I recommend to my clients that they use practices that maximize
> communication between developers of dependent layers-
> shoulder-to-shoulder development, episodic design (Ward Cunningham's
> phrase- see the Episode patterns in http://c2.com/ppr), group CRC
> review, and Grady's vertical slice-o-functionality.
>
> I agree with Robert Martin- integrate early, integrate often.

In many systems, even interesting ones, many domains _are_ understood.
I therefore continue to recommend that one should deliver the domains
closest to the machine (OS, Network) first, and then those close to
the outside world (user interface, process I/O).

There is value--in a limited number of cases--in delivering only
vertical portions of _some_ of these domains. For example, I might
put together a project plan that delivers _all_ of the OS and Network,
but only the Digital Input quarter of the PIO, so that the hardware
engineers may do their job. Or all the OS, simple IPC within a
single machine and the output-only display subsystem so I can
_show_ something. Certainly, I would deliver, initially, only
one (or in a push two) vertical slices of the application.

That said, your point is a very good one: Integration problems
are the worst. But there are two kinds of integration problem.
One is that caused by misunderstanding the dependencies between
domains (or layers, if you prefer). These problems are relatively
easy to manage: make explicit the assumption-requirement pairs
between layers. Of course, this does presume the existence
of these management skills. A problem that is missed--especially
a performance problem--can be a bear to fix. But it will be
localized, especially if using full translation capability so
that a change may be easily propagated throughout newly generated
code.

The second type is peer-to-peer communication between implemented
subsystems. For example, there may be a misunderstanding between
the group working Trading vs that working Accounting. Each of these
problems is MUCH easier to fix (because each is purely local--
between an object in trading and an object in accounting, say), but
they tend to be more widespread and much less easy to find.
We have tried for years to address these problems by building
firewalls: "Let's just define the interface and let my Atlanta team
loose. Y'all in London just make sure you adhere to the interface."
Yeah, right. Excessive vertical-slice-o'-functionality deliveries
exacerbate this problem, not alleviate it. (My use of the word
"excessive" is intended exactly as stated. It is not an attempt
to shift the emotional focus of the reader.)

Today's received wisdom makes both iteration and incremental
delivery the "third rail" of software development--touch it
and die! After all, if your analysis tools are advisory
and your management skills weak, what options do you have?
The problem here is that THE SOLUTION VALIDATES THE PROBLEM.
The more you partition into vertical slices, the more
integration problems you have, so the more vertical slices
you make.

I submit that one should deliver lower (closest to the machine)
layers first. And the more uncertain of your understanding
of the _system_ that you are, the more you should attempt to
push upwards towards the application to deliver a vertical
slice-o'-functionality. If all you're doing is delivering
vertical slices, then this is an admission that you have no
idea what's going on in the system as a whole.

At the risk of agreeing with RCM, I say integrate early and
often--just make sure you do so from the bottom layer up.

-- steve mellor

Royce E. Mitchell III

unread,
Dec 3, 1995, 3:00:00 AM12/3/95
to
mbk@I_should_put_my_domain_in_etc_NNTP_INEWS_DOMAIN (Matthew B.
Kennel) wrote:

>By virtue of personal experience I think there are language features
>often, but not always, distinct from "object orientation" that make the
>programming process

> "more maintainable, more robust, and more reusable"

>If this were not so why even use C++? Because C++ 'supports' people
>programming object oriented systems better than C or Fortran. I think there
>are languages that do the same to C++.

>: There is nothing about any of the C++ environments that I am aware of
>: (and I am aware of a few) that prevent, interfere with, or even slightly
>: impede iterative development.

>I can think of one right away: the lack of static type inference, where the
>declared types of variables can depend on the result of other expressions.

>In a language which does do this (e.g. Eiffel & Sather for OO languages) you
>can change return types and attribute types of one object or subroutines
>under development and other dependent pieces of the system will often change
>automagically to make things fit and simultaneously retain full static type
>checking. It decreases the friction of change.

>This is not a trivial benefit.

Amen, Amen, Amen. I run into this problem time and time again; what
are the odds against it being fixed?
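
For readers without Eiffel or Sather handy, a hedged aside: C++ itself
eventually grew a comparable facility (auto/decltype, standardized in
C++11, long after this thread). The sketch below is anachronistic here
and uses invented names, but it shows the friction reduction the quoted
post describes: change make_id()'s return type and the dependent
declarations re-type themselves on recompilation, static checking intact.

    #include <vector>

    // Hypothetical factory; change its return type later and the
    // declarations in main() adapt automatically when recompiled.
    long make_id() { return 42L; }

    int main() {
        auto id = make_id();                // type anchored to make_id()'s result
        std::vector<decltype(id)> ids;      // element type follows automatically
        ids.push_back(id);
        return 0;
    }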

>The lack of GC means that you have to build in logic and facilities for
>various kinds of storage management too early in to your design. You end
>up fixating upon some representation too early. GC decreases the
>friction of changes.

>I didn't appreciate either of these until I wrote programs in a language
>which did them. I'd hate to go back.

Could you be a little more specific? Until this point I've sworn by
C++, but...

>: On the other hand, the C++ compile and
>: link loop can be longer than the Smalltalk edit and go loop. So quick
>: cycle hacking is easier in Smalltalk than in C++. (Unless you are using
>: a tool like Object Center which allows quick cycle hacking in C++).

Where can I get this Object Center?


AM Marston

unread,
Dec 3, 1995, 3:00:00 AM12/3/95
to
It's fun to watch this thread. I personally think there is a difference
between the average analyst and the average programmer. The
point is no organization can build a process that relies on the
few brilliant people who can combine all those skills. Accept
reality and make the best use of the talent you have.
It seems that few of you have been the customer trying to
tell an engineer what you want. Try it some time; it's not as
easy as you might think. The customer's world is quite often a very
complex place. It is impossible to think of every requirement
that the system has; just look at the bulk of documents that
are created to describe business processes. Now try to be
consistent over several years describing hundreds or thousands
of requirements. Document them, you say? That usually works, but
I have had cases where the following year I look at a requirement
and must force myself to say - stop, I'm not stupid, I must have
had a good reason to say that in the first place.

So accept human frailty, recognize that mistakes will be made
on all sides, organize yourself to allow change and get on with
the job. The customer will change requirements and the engineers
will build the wrong things. Live with it.

Tony


Ell

unread,
Dec 3, 1995, 3:00:00 AM12/3/95
to

AM Marston (mar...@rmc.ca) wrote:
:...
: So accept human frailty, recognize that mistakes will be made
: on all sides, organize yourself to allow change and get on with
: the job. The customer will change requirements and the engineers
: will build the wrong things. Live with it.

For large, and, or complex projects, there is evidence that good process
helps us to effectively take what you point out into account.

Also changes occur not only because of mistakes, but due to the natural
learning process.

Elliott

Ell

unread,
Dec 3, 1995, 3:00:00 AM12/3/95
to
Robert C. Martin (rma...@oma.com) wrote:
: In article <sourceDI...@netcom.com> sou...@netcom.com (David Harmon)
: writes:
: In article <RMARTIN.95...@rcm.oma.com>,
: Robert C. Martin <rma...@oma.com> wrote:
: >
: >Requirements have always been the least stable part of any project I
: >have worked upon.

: I hope this illustrates why some of us have a problem with your previous
: use of the word "stable". Requirements are the thing upon which
: everything else depends, and thus by your previous definition are
: ultimately "stable". Yet here we see how false this can be.

: Not at all. Many companies have failed *because* they could not
: change their product specs to move with the market. When lots of code
: depends upon a spec, that spec becomes hard to change,...

But while lots of code may depend upon the former spec, the spec _may_ be
changing in ways not congruent with existing code. Then it has to be
assessed whether or not spec changes are within changes allowed for that
time.

: The whole idea is to isolate as much code as possible from the spec,
: so that the bulk of the code does *not* depend upon the spec. Then,
: the spec can be changed with relatively little pain.

The bulk of the code may not depend on a particular spec, but the bulk of
the code should depend on all specs taken as a whole.

: The reason requirements are unstable is that they are tied to
: money. And when the motivations of the industry change, the
: requirements have to change too, and quickly. It does not matter how
: hard the software is to change, the price will either be paid, or the
: project will fail. Thus there is a huge impetus behind the changing
: requirements, one that cannot be withstood.

But it must eventually be withstood, money behind it, or not. That is if
we want to serve user needs.

: Yet, a program that is overly dependent upon its requirements may
: prove to be too inflexible to change as quickly as the requirements
: do. And so the delivery *against* the requirements is stable by
: comparison to the requirements themselves.

:...

Yes, once a house is 3/4 built we can't drop most major aspects. The
users will then have to put up with what they got, unless they want to pay
significantly. But if we have been carrying out "process" correctly we
should be on track by the 3/4 mark. At earlier points, if we have been
operating the process well, and if the users aren't presenting "radically"
different reqs, we should generally be able to make changes in the
program.

Elliott

Robert C. Martin

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
In article <49qjgp$p...@newsbf02.news.aol.com> bytes...@aol.com (Bytesmiths) writes:


The problem with MI is you usually multiply inherit a lot of garbage that
you don't want, whereas with delegation, you add in what you need. I don't
call MI "robust," I call it "scary!"

In a static language like C++ or Eiffel, MI is very useful, and not at
all scary. In dynamically typed languages like Smalltalk and Obj-C, MI
is not nearly as useful, but is still not scary.

Robert C. Martin

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
In article <yf3loov...@sabi.demon.co.uk> pier...@sabi.demon.co.uk (Piercarlo Grandi) writes:

rmartin> I would claim that most compaines are not getting the benefits
rmartin> promised by OO because they have chosen to use C++ rather than
rmartin> a true OO system (Smalltalk, Objective-C, to name a couple).


Please watch your attributions. Those of you who read my postings
know that I could not have written the above.

On the other hand I agree with the conclusion: but the main defect of
C++ is not that is not "true OO" whatever that means, but rather that it
is rather too complex and low level to be used in application programming
by application programmers.

Bah. We use it for applications all the time. It is just a language.

Robert C. Martin

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
In article <NEWTNews.81793...@sallys.projtech.com> st...@projtech.com writes:
In article <RMARTIN.95...@rcm.oma.com>, <rma...@rcm.oma.com> writes:
Steve writes:
> > Consider an application domain that depends on an OODBMS domain.
> > Why is a grave flaw to build the OODBMS separately from the
> > application? Why will there be an integration nightmare?
>
> Have you ever worked with an OODB that has never before been tied to
> an application? I came close to this once. I was part of a project
> which was one of the first users of a major OODB about 4 years ago.
> The OODB designers had done a real nice job of designing what they
> *thought* application designers would need. But they were wrong about
> what we needed. The OODB had lots of bells and whistles that we just
> didn't need, and was missing some fundamental features that were
> extremely important. It was close to Hell.

But wasn't it deserved? You're telling me that you just let
these OODB people run off without any checking of what they were
building? Aacck!

No, I am saying that we bought a third party OODB. And, at the time
that we were purchasing it, it looked pretty good. We didn't know
that we needed the features that it didn't have until the project was
well along.

Now, you might shrug and say that we simply hadn't analyzed our
product well enough. That we *should* have known what features we
needed *before* we made the decision to buy the OODB. To that I say
Phaaaa! Until we actually started using the product, we could not
have known its dynamic constraints. Nor could we have known what
our own resource needs were. The only way to learn was to try it.

One thing I'm sure we can agree on:
dependencies between domains must be managed and controlled
because they are the most dangerous if we get them wrong.

Yes, on that point we agree.

I would suggest that we need tools other than incremental
delivery to ensure that this situation does not happen.

Agreed as well. The more tools to help with this, the better.

Patrick D. Logan

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
jge...@omni.voicenet.com (James Gerber) wrote:

>rma...@rcm.oma.com (Robert C. Martin) DID NOT:
>
>> I would claim that most companies are not getting the benefits promised by
>> OO because they have chosen to use C++ rather than a true OO system
>> (Smalltalk, Objective-C, to name a couple).
>
>"It is a poor workman that blames his tools"
>
>Maybe people need a more robust system than Smalltalk, which still clings to
>the obviously false idea that the world can be modeled by single inheritance.

Two points of correction:

* If James Gerber knew anything about Robert Martin, he would know that the
quote above could never have originated from RCM!

* Smalltalk is not limited to Single Inheritance for "modelling the world."
Any class in Smalltalk can subscribe to any number of interfaces, including
the interfaces inherited from its superclass. This is standard operating
procedure in Smalltalk.

It is foolish to talk about things that one is ignorant of.

--
mailto:Patrick...@ccm.jf.intel.com
(503) 264-9309, FAX: (503) 264-3375

"Poor design is a major culprit in the software crisis...
..Beyond the tenets of structured programming, few accepted...
standards stipulate what software systems should be like [in] detail..."
-IEEE Computer, August 1995, Bruce W. Weide

James Gerber

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
pier...@sabi.demon.co.uk (Piercarlo Grandi) wrote:

>To confuse the code structure of an application with a model of the
>world is a common and well published misconception, but a moment's
>reflection should give at least an intuitive feeling that the two are
>completely different concepts in different domains of discourse.

I totally disagree. To me, one of the most important benefits of OO is
that it allows the code structure of the program to be close to the
"real world", since the real world is O-O. The more the code of a
program diverges from the real world, the more likely it is that the
programmer will make a mistake in translating between the two, and the
less productive the process of programming will be.

Robert C. Martin

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
In article <49t6nj$4...@news4.digex.net> e...@access1.digex.net (Ell) writes:

AM Marston (mar...@rmc.ca) wrote:
:...
: So accept human frailty, recognize that mistakes will be made
: on all sides, organize yourself to allow change and get on with
: the job. The customer will change requirements and the engineers
: will build the wrong things. Live with it.

For large and/or complex projects, there is evidence that good process
helps us to effectively take what you point out into account.

Also changes occur not only because of mistakes, but due to the natural
learning process.

Agreed on both counts. However there is another, more important,
reason for change; and that is that the world changes daily.
Requirements change. When they do, it does not necessarily mean that
the old requirements were wrong. Instead, it usually means that the
world has changed.

Ell

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
Robert C. Martin (rma...@oma.com) wrote:
:...
: Oh, Elliott. (Sigh.) I did not say that "process" was not possible. I
: said that I did not believe it was possible for any process to "figure
: out what you want to build *before* you call in the programmers to
: design and implement it."

In fact, I disagree that it is "impossible for any process to 'figure out
what you want to build *before* you call [in] the programmers to design
and implement it.'" Often it is possible to do so with analysts _before_
the "programmers" are "called in". But even where it is not possible to
do so fully before the programmers are called in (which means the
programmers' feedback helps us "figure it out"), process generally sets
the basis for such beneficial feedback.

(RM)
: :...

: : For those of us in this situation, up-front process is not the answer.
: : Lots and lots of up front domain analysis is also not the answer. The
: : answer lies in understanding how to build systems that can tolerate
: : high levels of change.

: Process is not just upfront, it just is.

: Right. I was saying that "Up Front" process is not the
: answer. i.e. doing lots of busywork up front trying to completely
: define the product before we actually start designing and building it.

When I said "it just is", I also meant that "process" is more than
attempting to do as much as possible as early in the process as we can.
Process, in large part, encompasses making changes due to feedback from
Design and Implementation.

Elliott

Patrick D. Logan

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
st...@projtech.com wrote:

>At the risk of agreeing with RCM, I say integrate early and
>often--just make sure you do so from the bottom layer up.

I look at risk management in two dimensions:

1. The risk of failing to provide the correct behavior.
2. The risk of any technology used to implement those behaviors.

Therefore:

A. Prioritize the known required behaviors.

B. Prioritize the proposed technologies according to which behaviors
they will be applied to, and how certain it is that each technology
will be used.

C. Prioritize resources (people, tools, money, etc.) toward both risks
as best meets the overall risk:

i. Prioritize toward vertical prototypes of the most important
behaviors. Make them as independent of any specific technologies
as possible.

ii. Prioritize toward horizontal prototypes of the most critical
technologies: i.e. the ones used in the most important behaviors
that are also the least understood.

iii. As horizontal prototypes prove the technologies, plug them into
the vertical prototypes to prove their combination.

Simon John Shurville

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
I am writing an application that will contain several sets which need
to communicate with one another. Instead of using a global variable to
contain the model I would like to use a singleton (as described by
Gamma et al), i.e., a one-off object that is easy for other objects to
find. This seems to lower the amount of hard coding that would link
sets together via a global variable name (although I imagine that the
class name would need to be hard wired).

I can see that this should be easy to do in Smalltalk; but I wonder
what your collective standards and norms are? If, for example, I create
some sets as class variables and then set up relationships between them
that are mediated via the singleton would other programmers swear under
their breath during maintenance? Is there a more standard way to do
this that I have not yet come across? Your opinions would be very
welcome.
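
For concreteness, a minimal sketch of the Gamma et al. Singleton, in
C++ rather than Smalltalk (the class name Model and the accessor
instance are illustrative assumptions, not from the post):

    // Other objects reach the one shared model through a class-level
    // access point instead of a hard-coded global variable name; only
    // the class name is "hard wired".
    class Model {
    public:
        static Model& instance() {
            static Model theModel;   // the one-off object, created on first use
            return theModel;
        }
        // ... the sets and their relationships would be mediated here ...
    private:
        Model() {}                   // clients cannot construct their own
        Model(const Model&);         // copying disallowed (declared, never defined)
    };

Usage from anywhere in the application is then Model::instance().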


Cathy Mancus

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to

In article <RMARTIN.95...@rcm.oma.com>, rma...@oma.com (Robert C. Martin) writes:
> In article <49hs6g$i...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:
> There seems to be an attitude in industry that good analyst == good
> designer == good implementor. It just ain't so; analysis requires
> very different skills from programming and should be done by people
> who possess those skills.

> There is truth to this. There are those who specialize in analysis, or
> design, or implementation; and can do one better than the others.
> However, I have found that the *best* analysts are also the *best*
> designers and *best* implementors.

But there aren't enough "best" people to go around. Many projects
must struggle along with _competent_ people who are definitely not the
_best_ people around.

Dom De Vitto

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
Robert C. Martin (rma...@oma.com) wrote:
: In article <49fplo$n...@ixnews5.ix.netcom.com> ke...@ix.netcom.com (Kent Beck ) writes:
: Fairy-land is the only place you know what you're supposed to do before
: you start. I've seen three reactions to this- force the client to
: commit to something early and deliver a system they don't want, accept
: changes throughout the project and end up with a late, low quality
: system, or deliver the system in pieces, imagining the long term result
: but being prepared to change. This last is what works best for me, and
: (giving a brief nod to the title of this thread) what is suggested by
: SCRUM.
: Manage change.

: Well said.

Here is my $0.02:

'Incremental development' is pretty much the only way to develop a system
nowadays, at least a system of much complexity or size.

The difficulty is not just spec'ing what you want, but spec'ing what the user
*may want in the future*.

It's the loss of this sort of vital information that causes systems to have
plasticity and elasticity in the wrong places, and causes delivered systems
to be a poor fit for the problem. Often this is exacerbated because the
problem drifts.

As above "Manage change", but also "analyse change" and "design for change where
you can".

Dom
--
_____________________________________________________________________________
Dom De Vitto dev...@ferndown.ate.slb.com
Schlumberger Automatic Test Equipment f...@bcs.org.uk
Board Systems Desk/voicemail: +44(0) 1202 850951
Wimborne, Dorset, Site reception: +44(0) 1202 850850
England, BH21 7PP Fax: +44(0) 1202 850988
_____________________________________________________________________________

Cathy Mancus

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to

In article <RMARTIN.95...@rcm.oma.com>, rma...@oma.com (Robert C. Martin) writes:
> In my organization, for example, I hire only
> those engineers who exhibit strong ability in analysis, design and
> implementation. It is a really top notch crew. As a result, four of
> our people have been able to write an extremely complex 300,000 line
> application in about 18 months. Moreover, we have achieved
> approximately 80% reuse and have developed an OO framework that is
> extremely resilient to the frequent changes made by the customer.
> This, based upon very terse specs that seldom survived the design and
> implementation.

How hard was it to find those four people? How much above the market
rate are you paying? When someone else hires one or more of those people away
with a better deal -- after all, those people would be just as beneficial
to another person's project -- are you equipped to continue to meet your
deadlines?

Piercarlo Grandi

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
>>> On Sat, 02 Dec 1995 17:59:16 -0800, m3...@halcyon.com (Fred Morris) said:

Fred> In article <yf3loov...@sabi.demon.co.uk>, pier...@sabi.demon.co.uk
Fred> (Piercarlo Grandi) wrote:

pcg> On the other hand I agree with the conclusion: but the main defect
pcg> of C++ is not that is not "true OO" whatever that means, but rather
pcg> that it is rather too complex and low level to be used in
pcg> application programming by application programmers.

Fred> Could you graph that? 1 Guru Point for "bein' out there".

Graph? I think Adventure's Bottomless Pit would be the best
image. :-)

Igor Chudov

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
Robert C. Martin (rma...@oma.com) wrote:
* There seems to be an attitude in industry that good analyst == good
* designer == good implementor. It just ain't so; analysis requires
* very different skills from programming and should be done by people
* who possess those skills.
*
* There is truth to this. There are those who specialize in analysis, or
* design, or implementation; and can do one better than the others.
* However, I have found that the *best* analysts are also the *best*
* designers and *best* implementors.

_The Bell Curve_ asserts that job success directly depends on IQ. So it is
no surprise that if a smart person excels at one activity, she would excel
at another, too.

My experience confirms your observation.

--
- Igor. (My opinions only) http://www.algebra.com/~ichudov/index.html
For public PGP key, finger me or send email with Subject "send pgp key"

You know you have achieved perfection in design, not when you have nothing
more to add, but when you have nothing more to take away.
- Antoine de Saint Exupery.

Robert C. Martin

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
In article <49sk2h$h...@cs6.rmc.ca> mar...@rmc.ca (AM Marston) writes:

It's fun to watch this thread. I personally think there is a difference
between the average analyst and the average programmer. The
point is no organization can build a process that relies on the
few brilliant people who can combine all those skills.

Many can't, but some do. In my organization, for example, I hire only
those engineers who exhibit strong ability in analysis, design and
implementation. It is a really top notch crew. As a result, four of
our people have been able to write an extremely complex 300,000 line
application in about 18 months. Moreover, we have achieved
approximately 80% reuse and have developed an OO framework that is
extremely resilient to the frequent changes made by the customer.
This, based upon very terse specs that seldom survived the design and
implementation.

I suppose I shouldn't brag about them, but they make me proud.

Robert C. Martin

unread,
Dec 4, 1995, 3:00:00 AM12/4/95
to
In article <49tb0a$4...@news4.digex.net> e...@access1.digex.net (Ell) writes:

: The reason requirements are unstable is that they are tied to
: money. And when the motivations of the industry change, the
: requirements have to change too, and quickly. It does not matter how
: hard the software is to change, the price will either be paid, or the
: project will fail. Thus there is a huge impetus behind the changing
: requirements, one that cannot be withstood.

But it must eventually be withstood, money behind it, or not. That is if
we want to serve user needs.

Huh? It is the user that is demanding the changes. How can you serve
the user if you "withstand" his demands?

Ell

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
Robert C. Martin (rma...@oma.com) wrote:
: In article <49t6nj$4...@news4.digex.net> e...@access1.digex.net (Ell) writes:
:
: AM Marston (mar...@rmc.ca) wrote:
: :...
: : So accept human frailty, recognize that mistakes will be made
: : on all sides, organize yourself to allow change and get on with
: : the job. The customer will change requirements and the engineers
: : will build the wrong things. Live with it.
:
: For large, and, or complex projects, there is evidence that good process
: helps us to effectively take what you point out into account.
:
: Also changes occur not only because of mistakes, but due to the natural
: learning process.
:
: Agreed on both counts. However there is another, more important,
: reason for change; and that is that the world changes daily.
: Requirements change. When they do, it does not necessarily mean that
: the old requirements were wrong. Instead, it usually means that the
: world has changed.

And we should adapt the design to that new logical situation, if at all
practicable. With good process and design, we should be able to do so in
most cases.

Elliott

Patrick D. Logan

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
mbk@I_should_put_my_domain_in_etc_NNTP_INEWS_DOMAIN (Matthew B. Kennel) wrote:

>Can we summarize:
>
> "well managed dependencies" often means 'get rid of two way relationships
> as much as possible'.

In my experience, the most common kind of mismanaged dependency is
"overcommitment". That is, a dependency on more than
the minimum amount of information necessary to accomplish a task.

For example, in distributed software, say a client/server application,
if some object is sending some bits over the wire, that wire
should be abstracted from the lower level details of, say, TCP or
Netware. Otherwise it is depending on more information than is necessary.
If any of the superfluous information changes, then the object sending
the bits will have to change too. And so will every other object using
that information. If the details had been abstracted away, only the
implementation of the abstraction would have to change. Each object
would remain unchanged.

There are many ways to overcommit in software. Sometimes it seems to me
that's most of what we do!
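
A sketch of the abstraction being described, in C++ with hypothetical
names (Wire, TcpWire, sendReport): the sender commits only to the
minimal interface, so a change in transport details touches one class
rather than every sender.

    #include <cstddef>

    // The minimum a sender needs to know: "bits go over a wire".
    class Wire {
    public:
        virtual ~Wire() {}
        virtual void send(const char* bytes, std::size_t count) = 0;
    };

    // All TCP-specific knowledge lives here; a NetwareWire could be
    // substituted without touching any sender.
    class TcpWire : public Wire {
    public:
        virtual void send(const char* bytes, std::size_t count) {
            // socket calls would go here
        }
    };

    // Depends on no more information than necessary; it never changes
    // when the transport does.
    void sendReport(Wire& wire) {
        wire.send("report", 6);
    }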

Piercarlo Grandi

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
>>> On Mon, 04 Dec 1995 20:12:41 GMT, jge...@omni.voicenet.com (James
>>> Gerber) said:

James> pier...@sabi.demon.co.uk (Piercarlo Grandi) wrote:

pcg> To confuse the code structure of an application with a model of the
pcg> world is a common and well published misconception, but a moment's
pcg> reflection should give at least an intuitive feeling that the two
pcg> are completely different concepts in different domains of
pcg> discourse.

James> I totally disagree. To me, one of the most important benefits
James> of OO is that it allows the code structure of the program to be

Also note that I am talking of _code_ structure. What you say later
_may_ have some remote semblance of sense if we were talking of the
_data_ structure of a program: after all the way _data_ is organized is
supposed to reflect the semantics of the problem at hand, which is often
a model, whether of the real world or not is anybody's guess.

But there is no obvious reason to believe that the _code_ structure of a
program needs to reflect the shape of any model of anything -- it should
reflect what is convenient for programming, one would naively imagine.

James> close to the "real world" since the real world is O-O.

And here we go again -- you can say this if you believe that you are the
first person in the history of philosophical and scientific thought that
can define precisely the expressions 'the real world' and 'the real world
is ...'.

Congratulations!

We are all waiting for the launch of your forthcoming book. But rush:
somebody else in this newsgroup is already working on exactly the same
subject.

Steve Hayes

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
David N. Smith <dns...@watson.ibm.com> wrote:

>In article <49hs6g$i...@brtph500.bnr.ca> Cathy Mancus, man...@bnr.ca
>writes:
>> This works well if the customer knows more or less what he wants and
>>has it documented, but continues to pass down small changes during the
>>life cycle of the project. The key word is SMALL, as in "does not
>>require redesigning the whole product".

>As I read this I was reminded of a book called, I think, House, by Tracy
>Kidder. I actually read a long excerpt in The Atlantic, but I'm certain
>there was a book too.

>It's about a small group of designer/builders who do custom houses and a
>house that they do for a couple. It's the spec changes that happen as the
>house is being built, and reactions to them, that make the story.

There is definitely a book. It's a pretty good read. The paperback
copy is from Avon Books (1985). The ISBN was 0-380-70176-6. Probably
out of print.

Also try "The Soul of a new Machine" from the same author. It's about
the struggle to build a new PC - I think it was the Eagle from Data
General (?).

Steve Hayes


Robert C. Martin

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
In article <49vge8$2...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:

In article <RMARTIN.95...@rcm.oma.com>, rma...@oma.com (Robert C. Martin) writes:

> In my organization, for example, I hire only
> those engineers who exhibit strong ability in analysis, design and
> implementation. It is a really top notch crew. As a result, four of
> our people have been able to write an extremely complex 300,000 line
> application in about 18 months. Moreover, we have achieved
> approximately 80% reuse and have developed an OO framework that is
> extremely resilient to the frequent changes made by the customer.
> This, based upon very terse specs that seldom survived the design and
> implementation.

How hard was it to find those four people?

There are six associates. Four worked on the project above. Of the
six, I have known them all for years. Some of them are former
co-workers, and others I knew through correspondence. So, to answer
your question, it was hard. And it will be hard to grow the
organization; I will be very, very selective about hiring people. On
the other hand, I have not exhausted the pool of potential associates.
And I am meeting more all the time.

How much above the market rate are you paying?

In a public forum, it is difficult to go into details. I pay very
well. And there are other compensations too. For example, we all
work from our homes. We telecommute. And we live wherever in the
world we want. There is an associate in Tucson, and another in New
York, and we are looking to recruit one in the UK. Some of us travel,
and some remain at home to work on projects. We have little or no
management structure, we are *all* software engineers. So meetings
are few and far between. Also, the work is very challenging, and
there is an opportunity to learn a great deal.

This environment isn't for everyone, but it works well for us.

When someone else hires one or more of those people away
with a better deal -- after all, those people would be just as beneficial
to another person's project -- are you equipped to continue to meet your
deadlines?

So far, this has not happened. It is difficult for companies to hire
them away. The more likely scenario is that an associate will strike
off on his/her own. This has happened once. In that case the
associate was very cooperative about the transition, and still does
work for us at need.

As I said, I have not exhausted the potential pool of associates, so
if one were to leave, I could find another. If three were to leave, I
might face some difficulties.

----------

Having said all that, I agree with your point. A large software firm
is going to find it impossible to do what I have done. However, that
should not preclude them from hiring well and paying well. In the
long run it is cheaper to hire 100 top notch engineers at 120K than it
is to hire 300 mediocre engineers at 40K. They will get more done in
less time and with fewer errors.

Is there a place for hiring junior engineers? Of course. But only
under the direct and close supervision of really experienced
engineers. It seems remarkable to me that very few companies do this.
Instead they hire kids right out of school, toss a spec at them, and
say: "OK, code this.". No real mentoring, no real guidance, no reall
support for learning the tough lessons ahead. This is a shame.


--
Robert Martin | Design Consulting | Training courses offered:
Object Mentor Assoc.| rma...@oma.com | OOA/D, C++, Advanced OO
14619 N. Somerset Cr| Tel: (708) 918-1004 | Mgt. Overview of OOT

Cathy Mancus

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to

In article <RMARTIN.95...@rcm.oma.com>, rma...@oma.com (Robert C. Martin) writes:
> In article <49vge8$2...@brtph500.bnr.ca> man...@bnr.ca (Cathy Mancus) writes:

>> How hard was it to find those four people?

> to answer your question, it was hard. And it will be hard to grow the
> organization.



>> How much above the market rate are you paying?

> In a public forum, it is difficult to go into details. I pay very
> well. And there are other compensations too. For example, we all
> work from our homes. We telecommute. And we live wherever in the
> world we want. The is an associate in Tucson, and another in New
> York, and we are looking to recruit one in the UK. Some of us travel,
> and some remain at home to work on projects. We have little or no
> management structure, we are *all* software engineers. So meetings
> are few and far between. Also, the work is very challenging, and
> there is an opportunity to learn a great deal.

Just a general comment to the world in general; with a deal like
that, you will have no problem getting any number of engineers, and
you can afford to be choosy. Many of us are disgusted at getting
essentially the same offer over and over; 2 weeks vacation (the third
week is always far enough down the road that you know you'll be
gone before then), no telecommuting, work locations usually in high-traffic
areas, mediocre (and declining) health benefits, tiny cubes, etc, etc.
In short, if you want people, improve your work environment. To
some of us, that's more important than a large raise.



> This environment isn't for everyone, but it works well for us.

It would work well for many of us who haven't been offered it, I'm sure.

> ----------



> In the long run it is cheaper to hire 100 top notch engineers at 120K than it
> is to hire 300 mediocre engineers at 40K. They will get more done in
> less time and with fewer errors.

Personally, I find your other perks more appealing than more salary
would be.



> Instead they hire kids right out of school, toss a spec at them, and
> say: "OK, code this.". No real mentoring, no real guidance, no reall
> support for learning the tough lessons ahead. This is a shame.

Yes, this is all too common a pattern. Companies are reluctant
to make a long-term investment in their people. This has become
self-perpetuating; since companies now expect people to leave in
3 years or so, they don't want to invest in them.

Todd Knarr

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
In <RMARTIN.95...@rcm.oma.com>, rma...@oma.com (Robert C. Martin) writes:
>Huh? It is the user that is demanding the changes. How can you serve
>the user if you "withstand" his demands?

If the spec keeps changing, you can by definition never deliver code that
meets the spec. At some point you have to draw the line and freeze the spec
so code can be written to and tested against it. Otherwise you are trying
to hit not only a moving target but one that knows where you're aiming and
moves to avoid that spot after you've fired.

The trick is to have the changes be a converging series, so that when the
changes are major you don't have tons of code committed, and by the time
you've got 90% of the code done the changes are relatively minor. Good
design helps by letting you classify more changes as "relatively minor".

--
Todd Knarr : tkn...@xmission.com | finger for PGP public key
| Member, USENET Cabal

"Seriously, I don't want to die just yet. I don't care how
good-looking they are, I! don't! want! to! die!"
-- Megazone ( UF1 )


Bytesmiths

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
"The experimental MI system never made it into 'Smalltalk-8x (for x >
0)'"

One would not think so if one's only exposure was through books, would
one?

One could with equal authority write: "MS-DOS 2.x (for x > 0) had no such
thing as Terminate and Stay Resident," but it's harder to refute millions
of DOS programmers who know better than it is to refute a handful of
Smalltalk programmers who know better.

A description of a thing is NOT that thing! (When it comes to software, it
is rarely even close! :-)

Jan

Jan Steinman <jan.byt...@acm.org>
Barbara Yates <barbara.b...@acm.org>
2002 Parkside Ct., West Linn, OR 97068, USA +1 503 657 7703

John Greve

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
In article <RMARTIN.95...@rcm.oma.com>,

rma...@oma.com (Robert C. Martin) wrote:
>In article <49tb0a$4...@news4.digex.net> e...@access1.digex.net (Ell) writes:
>
> : The reason requirements are unstable is that they are tied to
> : money. And when the motivations of the industry change, the
> : requirements have to change too, and quickly. It does not matter how
> : hard the software is to change, the price will either be paid, or the
> : project will fail. Thus there is a huge impetus behind the changing
> : requirements, one that cannot be withstood.
>
> But it must eventually be withstood, money behind it, or not. That is if
> we want to serve user needs.
>
>Huh? It is the user that is demanding the changes. How can you serve
>the user if you "withstand" his demands?
It works for Microsoft. :-)

John Greve
jhg...@epx.cis.umn.edu

Ell

unread,
Dec 5, 1995, 3:00:00 AM12/5/95
to
Robert C. Martin (rma...@oma.com) wrote:
: In article <49tb0a$4...@news4.digex.net> e...@access1.digex.net (Ell) writes:

: : The reason requirements are unstable is that they are tied to
: : money. And when the motivations of the industry change, the
: : requirements have to change too, and quickly. It does not matter how
: : hard the software is to change, the price will either be paid, or the
: : project will fail. Thus there is a huge impetus behind the changing
: : requirements, one that cannot be withstood.

I - Elliott - _did not_ write the above! I _responded to_ something
below that was couched in "similar syntax" to the above (not similar
meaning).

: But it must eventually be withstood, money behind it, or not. That is if
: we want to serve user needs.

(RM)
: Huh? It is the user that is demanding the changes. How can you serve
: the user if you "withstand" his demands?

My recollection of what I responded to at the time is that the syntax
implied there is a huge impetus to "not change a module" (the module
"withstands") because of physical dependencies on it. That - your design
principle - is one I disagree with in most cases.

Elliott

Alan Lovejoy

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
In <vwjzqd8...@osfb.aber.ac.uk> p...@aber.ac.uk (Piercarlo Grandi)
writes:
>Perhaps it's time that something like a SmliPlace startup took Self from
>the bosom of Sun Labs.

I wonder what David Ungar's thoughts are on Java?

--Alan (just wondering out loud) Lovejoy

Robert C. Martin

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
In article <49v9lr$b...@news.jf.intel.com> "Patrick D. Logan" <patrick...@ccm.jf.intel.com> writes:


i. Prioritize toward vertical prototypes of the most important
behaviors. Make them as independent of any specific technologies
as possible.

ii. Prioritize toward horizontal prototypes of the most critical
technologies: i.e. the ones used in the most important behaviors
that are also the least understood.

iii. As horizontal prototypes prove the technologies, plug them into
the vertical prototypes to prove their combination.


There is profound wisdom here. It is useless to argue for "only
vertical slices" or "only horizontal slices". Both have their place,
as Patrick has so ably shown. Vertical slices help us to manage the
risk that the spec is wrong. Horizontal slices help us to manage the
risk that we don't understand our technology.

Matthew B. Kennel

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
Bytesmiths (bytes...@aol.com) wrote:
: jge...@omni.voicenet.com (James Gerber) writes:
: "Maybe people need a more robust system than Smalltalk, which still
: clings to the obviously false idea that the world can be modeled by single
: inheritance."

: Look up "Delegation is Inheritance." I think it was Ingalls & Borning,
: OOPSLA 86 Proceedings, but it could have been from an earlier SIGPLAN. Q.
: E. D.

: The only people I ever hear this from are those who have not used
: Smalltalk. Smalltalk-80v2 had MI -- it proved to be under-used and a
: nuisance, and was ripped out. General delegation is at least as useful as
: multiple inheritance, albeit not as efficient in its present form. Some
: vendors (QKS comes to mind) have added efficient pre-dispatch actions,
: which makes general delegation more than a match for MI.

: The problem with MI is you usually multiply inherit a lot of garbage that
: you don't want, whereas with delegation, you add in what you need. I don't
: call MI "robust," I call it "scary!"

But in some particular languages that I use, you can 'multiply inherit'
precisely what you need from your implementation-parents, and delete
the remainder.

When you delegate, at least in the dumb way that I understand it, you
presuppose that there is "an object of the parent's type" hidden
inside me.

But what if that isn't the case?

What if I want to make a new class whose behavior is

My_new_class = F(old_class,new_stuff)

and not

My_new_class = old_stuff + new_stuff

For example, suppose I want to *delete* a data attribute from "old_class"?

Suppose old_class has function 'a' which internally calls 'b'. With
delegation, how do I override 'b' and not 'a' so that the old class's
'a' will now call my new 'matt-b', but only for instances of the new
class?

In the model of implementation inheritance that I'm used to, I am making
a new class which *uses* part of the implementation of other, existing
classes. I am not necessarily making a new object which has shells of old
object inside it. There is no reason that the new object should necessarily
have to be substitutable for its implementation-parents.

Or do I not get the point?
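
For what it's worth, a sketch of the 'a' calls 'b' case in C++
(hypothetical code, not from the thread): with inheritance and virtual
dispatch, overriding 'b' alone is enough, because the inherited 'a'
dispatches on the actual class of the instance. With naive forwarding
delegation, the delegate's own 'a' would still call the delegate's own
'b', which is exactly why the question bites.

    #include <iostream>

    class old_class {
    public:
        void a() { b(); }                        // 'a' internally calls 'b'
        virtual void b() { std::cout << "old b\n"; }
        virtual ~old_class() {}
    };

    class my_new_class : public old_class {
    public:
        virtual void b() { std::cout << "matt-b\n"; }   // override 'b', not 'a'
    };

    int main() {
        old_class oc;
        my_new_class nc;
        oc.a();   // prints "old b"
        nc.a();   // prints "matt-b" -- only for instances of the new class
        return 0;
    }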

: However, we are drifting from the thread into petty and useless language
: wars.

Language 'wars' may be useless or useful depending on the merit
of the topic.

: Jan Steinman <jan.byt...@acm.org>


: Barbara Yates <barbara.b...@acm.org>
: 2002 Parkside Ct., West Linn, OR 97068, USA +1 503 657 7703

BTW, I don't have C++ in mind.

cheers
Matt

Christopher Barber

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
>>>>> "IC" == Igor Chudov <ich...@espcbw.stat.ncsu.edu> writes:

IC> _The Bell Curve_ asserts that job success directly depends on
IC> IQ. So it is no surprise that if a smart person excels at one
IC> activity, she would excel at another, too.

This is getting off topic, but what the hell....

IMHO, _The_Bell_Curve_ is mostly crap, since it is based on the
concept that "intelligence" can be measured by a single scalar
variable (the IQ). Of course, this is absurd. In fact, two people
with the same IQ could have very different intellectual skills.
I have a friend who is a brilliant musicologist, a superb teacher,
an excellent writer, and an innovative scholar, but she always had
a tough time with math and science in college and I dare say that
she would make a lousy programmer.

However, within the realm of software I would have to agree that
there is an extremely high overlap between those that will make good
designers and those who will make good implementers.

- Chris


--
Christopher Barber Software Engineer BBN, Systems and Technologies
mailto:cba...@bbn.com http://malachite.bbn.com/~cbarber (within bbn)

Phil Brooks

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
Robert C. Martin (rma...@oma.com) wrote:
: Stability has a simple definition. A "state" is stable if it requires
: effort to change that state. The more effort required, the more stable
: the state. Software is stable if it requires great effort to change it.

Actually the mechanical definition of a "stable system" is that when a
force is exerted on it, it tends to return to its original state. The
example would be a clown doll with a big weight in the bottom. Punch it
and it tends to return to its stable (upright) state.

The unstable system then tends to continue changing after a force is exerted.
An example is a coin balanced on edge. Any lateral force will tend to cause
the coin to fall.

I think this analogy works well for software, but I come up with a different
impression of what stable software is as a result:

Stable software is software that can be changed (force applied) and will return
to its original state with regard to operational integrity and quality. No big
changes in interface or quality, and no massive internal ripple effect, are
expected as a result of a change in a stable software system.

Unstable software, on the other hand, is software in which small changes
tend to produce large effects in either operation or quality.

That means to me that "unstable" software requires great effort to change,
assuming one wishes to return to the original state of quality and operational
integrity. Stable software is much easier to change, in that changes tend not
to have unforeseen impacts on client source code, quality, or operational
integrity.

--
Phil Brooks, (phil_...@mentorg.com)
Regular Mail:
Mentor Graphics Corporation, 8005 SW Boeckman Road, Wilsonville, OR 97070
Receiving address (Fed Ex, UPS, etc):
Mentor Graphics Corporation, 27788 SW Parkway Avenue, Wilsonville, OR 97070
(503)685-1324, FAX (503)685-7839

AM Marston

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
Whew! Glad to see discussion on this. I wish I could work
in Robert's organization, but I think there are many companies
where there are hundreds of people working. As Cathy points
out, they can't all be brilliant, but they can all contribute.
Yes, the world changes. I think more people are recognizing
that changes to the system itself cause requirements
to change; it's a snowball effect that for some systems
makes it unrealistic and unreasonable to aim for perfection
and stability. Thanks to all for your comments.

Tony


Michael Elizabeth Chastain

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
In article <49vge8$2...@brtph500.bnr.ca>,
Cathy Mancus <ca...@zorac.cary.nc.us> wrote:
> How hard was it to find those four people? How much above the market
> rate are you paying? When someone else hires one or more of those
> people away with a better deal -- after all, those people would be just
> as beneficial to another person's project -- are you equipped to
> continue to meet your deadlines?

If these four people are completing successful projects together, I bet
they are happy in their jobs and an outsider will find it very hard to
offer 'a better deal'.

Michael Chastain
m...@duracef.shout.net

Piercarlo Grandi

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
>>> On 4 Dec 1995 16:40:19 GMT, "Patrick D. Logan"
>>> <patrick...@ccm.jf.intel.com> said:

Patrick> * Smalltalk is not limited to Single Inheritance for "modelling
Patrick> the world."

Patrick> Any class in Smalltalk can subscribe to any number of
Patrick> interfaces, including the interfaces inherited from its
Patrick> superclass. This is standard operating procedure in Smalltalk.

Patrick> It is foolish to talk about things that one is ignorant of.

Exactly! :-)

Piercarlo Grandi

unread,
Dec 6, 1995, 3:00:00 AM12/6/95
to
>>> On 04 Dec 1995 01:01:51 GMT, rma...@oma.com (Robert C. Martin) said:

Robert> In article <yf3loov...@sabi.demon.co.uk>
Robert> pier...@sabi.demon.co.uk (Piercarlo Grandi) writes:

rmartin> I would claim that most companies are not getting the benefits
rmartin> promised by OO because they have chosen to use C++ rather than
rmartin> a true OO system (Smalltalk, Objective-C, to name a couple).

Robert> Please watch your attributions.

Please watch whoever quoted you on this. Complain to him/her.
