
What is wrong with OO ?


Ahmed

Dec 3, 1996, 3:00:00 AM
to a.alk...@dcs.shef.ac.uk

Hello everybody,

I am a new research student working in the field of Object Oriented Technology. I have several
critical opinions about Object Orientation in general, and I would like to share them with you
and hear your expert comments and opinions.

Object Oriented Technology came with quite promising claims which, if achieved, could save software
development companies and organisations millions of pounds.

Some of these claims, for instance:
1 - high reusability of objects and frameworks
2 - Resilience to change, i.e. low software maintenance and evolution cost
3 - Easier understanding by the user, and a natural transition between analysis, design and
implementation, because they all use tangible, perceived objects.

However, the reality is not as bright as claimed. If it were, nobody today would think of developing
software with the traditional structured methods...

My question is: what is wrong with OO? Why has it not achieved its targets yet?
What are the main obstacles?

Is the problem with the immature OO methodologies (OO analysis and design in particular)?
Or is it a deficiency in the development tools used, like C++ or Smalltalk?
Or is it the steep difference in thinking between the traditional and OO schools?
Or is it related to the difficulty of object classification?
Or is it because of the vast legacy systems built using the traditional methods?
Or is it a combination of many other factors...?

I know that giving a precise answer to such a complex question is very difficult, but I would like to
hear the comments of people working in the field who have suffered through many difficulties.

I would really appreciate any participation, response, or even a pointer to a good reference,
and would be very grateful if the opinions were supported by some evidence...


Thanks

Yours
Ahmed Alkooheji
University of Sheffield
UK

Bill Gooch

Dec 3, 1996, 3:00:00 AM
to Ahmed

Ahmed wrote:
> ....

> Object Oriented Technology came with quite promising claims that if achieved can benefit the software
> development companies and organisations millions of pounds.
>
> Some of these claims for instance
> 1 - high reusability of objects and frameworks

While this may be claimed about specific frameworks, it is
not IMO a valid generalization about OOT. It is feasible
and important to design and implement objects which achieve
immediate *reuse*, general *reusability* is quite rare, and
exceedingly difficult to achieve, IME. Typically the costs
outweigh the benefits.

To be clear what I mean by "immediate reuse" - it is most
often fine-grained (method and protocol level) reuse of
behavior via inheritance, delegation, etc. which is readily
achievable and most important. Medium-grained (class level)
reuse is also feasible, although it requires greater design
effort and foresight (and/or prior experience in the domain).
Large-grained (framework level) reuse is much harder (I think
somewhat exponentially with the number of classes/protocols/
relationships involved), and much more rarely achieved.

> 2 - Resilience to change, i.e. low software maintenance and evolution cost

This depends entirely on the quality of the analysis, design
and implementation. Objects effectively *support* resilience
by allowing implementations to mirror problems in a way that
minimizes unwanted dependencies, thereby limiting the scope of
changes. However, such results certainly aren't automatic,
and the misconception that resilience is an inherent attribute
of OOT works against the accomplishment of it.

> 3 - Easier understanding by the user and Natural transition between the analysis, design,

I'm very unclear what you mean by "Natural" here, but again,
ease of understanding by anyone is entirely dependent on the
quality of analysis, design and documentation. Again, OOT
used effectively can facilitate ease of understanding, but
that doesn't happen by itself.

> implementation because they all use tangible perceived objects....

Sure, software objects are "tangible perceived objects"
(sometimes perceived anyway), if only inasmuch as we've
decided to *call* them "objects." The more I think about it,
the more this choice of a name for software entities strikes
me as having been a mistake.

--
William D. Gooch bi...@iconcomp.com
Icon Computing http://www.iconcomp.com
Texas liaison for the International Programmers Guild
For IPG info, see http://www.ipgnet.com/ipghome.htm

Bill Gooch

Dec 3, 1996, 3:00:00 AM
to

Bill Gooch wrote:
>
> Ahmed wrote:
> > ....

> > Some of these claims for instance
> > 1 - high reusability of objects and frameworks
>
> While this may be claimed about specific frameworks, it is
> not IMO a valid generalization about OOT. It is feasible
> and important to design and implement objects which achieve
> immediate *reuse*, general *reusability* is quite rare, and
> exceedingly difficult to achieve, IME....

Sorry, that last sentence should have read:

"Although it is feasible and important to design...."
^^^^^^^^

Fred Parker

Dec 3, 1996, 3:00:00 AM
to

Ahmed wrote:

> chop


> My question is what is wrong with OO ? why it did not achieved its targets yet.?
> What are the main obstacles?

> chop
"We don't suffer from a Deficiency of Knowledge,
We suffer from a Deficiency of Execution"

fjpa...@ix.netcom.com

Ell

Dec 4, 1996, 3:00:00 AM
to

Bill Gooch (bi...@iconcomp.com) wrote:

How about all of the objects that are reusable in PowerBuilder? Things
like Window, SQLCA, MLE, SLE, etc. objects which one uses time and time
again. Similarly with other frameworks like MFC, where one uses
CDocument, CDialog, CView etc classes time and time again. With these
objects and classes one has "immediate reuse" and "general reusability",
it seems to me.

Elliott


Ell

Dec 4, 1996, 3:00:00 AM
to

Ell (e...@access4.digex.net) wrote:

Sorry, I missed the "not IMO a valid generalization about OOT". Still
it seems to me that most everyone uses frameworks nowadays - their own, or
pre-made. I agree with you that given the mechanics and language
differences it's almost impossible to reuse a class/object from one
environment in another. But that doesn't make the goal for reuse within
the same environment any less important.

Elliott


WARREN KENT

Dec 4, 1996, 3:00:00 AM
to

It does work, and there is a tight clique of superstars who dish out new
insights regularly in many trade publications. IMO there are more and more
tools out there each day to maintain cohesive object life cycles for design,
coding and testing. Abstraction can be leveraged into a bug easily enough, but
again, OO tools for comprehension and testing virtually eliminate this (sorry
about the pun). The secret of any software is exhaustive testing, and so often
the pressure is so great that, aside from beta releases, the public is still
picking up bugs well into the first and second releases. So even with
misleading abstractions, object oriented development is less and less
difficult each day, with more complete tools to visualize and to understand
reliability and maintainability in the slippery world of class consciousness.
Anyway, that's one common yarn.

Harry Protoolis

Dec 4, 1996, 3:00:00 AM
to

On Tue, 03 Dec 1996 17:38:37 +0000, Ahmed <ACQ...@shef.ac.uk> wrote:
>Hello everybody,
>
>I am a new research student working in the field of Object Oriented Technology. I have several
>critical opinions about Object Orientation in general, and I would like to share them with you
>and hear your expert comments and opinions.
>
>Object Oriented Technology came with quite promising claims which, if achieved, could save
>software development companies and organisations millions of pounds.
>
>Some of these claims, for instance:
>1 - high reusability of objects and frameworks
>2 - Resilience to change, i.e. low software maintenance and evolution cost
>3 - Easier understanding by the user, and a natural transition between analysis, design and
>implementation, because they all use tangible, perceived objects.
>
>However, the reality is not as bright as claimed. If it were, nobody today would think of
>developing software with the traditional structured methods...
>
>My question is: what is wrong with OO? Why has it not achieved its targets yet?
>What are the main obstacles?

I think this is overly negative. OO has not been, and never will be, a
'silver bullet' to solve all software development problems, but no-one
but a few spin doctors ever claimed it would be.

However, the real question should be 'has OO made a significant positive
difference', and in my experience the answer is a resounding 'yes!'.

I have been a professional software engineer for 10 years now, the first
half of which was spent fighting against traditional structured
techniques; it was only despite them that I was able to get anything
finished.

The traditional techniques all suffered from a number of significant
flaws. Perhaps the most damaging one was what I (rather unkindly) think
of as 'The glorification of idiots' phenomenon. What I mean by this is
that projects were typically infested by a group of people who never
wrote any software, but spent most of the budget drawing diagrams that
the implementors never used.

The main contribution of OO has been what could be termed 'the
glorification of the implementor'. This has been achieved by the
effective marriage of Analysis, Design and Implementation. The result
is that every member of the team does all three of the key tasks.

In fact IMHO an OO team has no place for anyone who cannot do all
three tasks. Jim Coplien wrote an excellent pattern called
'Architect also Implements' which covers very nicely the reasoning
behind not allowing non-implementors to design systems.

Certainly the mecca of automatic reuse has not been achieved, but the
quantity and quality of 3rd party components available for most OO
languages already exceeds that available for their non-OO counterparts,
and IMHO this suggests a bright future.

Certainly OO has not made writing software trivial or automatic, but
then, *nothing ever will*.

Cheers,
Harry
-
alt.computer pty ltd software development consultants


Robert C. Martin

Dec 4, 1996, 3:00:00 AM
to

In article <32A465...@shef.ac.uk>, Ahmed <ACQ...@shef.ac.uk> wrote:

> Object Oriented Technology came with quite promising claims which, if achieved, could save
> software development companies and organisations millions of pounds.

Those claims were not made by the engineers and researchers who "invented"
OO. They were made by marketeers who found a new way to differentiate
products, and by engineers who had shut off their ability to employ
critical thinking.

>
> Some of these claims, for instance:
> 1 - high reusability of objects and frameworks
> 2 - Resilience to change, i.e. low software maintenance and evolution cost
> 3 - Easier understanding by the user, and a natural transition between analysis, design and
> implementation, because they all use tangible, perceived objects.
>
> However, the reality is not as bright as claimed. If it were, nobody today would think of
> developing software with the traditional structured methods...
>
> My question is: what is wrong with OO? Why has it not achieved its targets yet?
> What are the main obstacles?

The claims were too grandiose. Software is still software; and it is still
hard. There are still bugs, still ambiguous specifications, still volatile
specifications, still improperly trained engineers, still engineers who
shouldn't be engineers, still managers who don't understand the technology
they are trying to manage, still arbitrary completion dates, etc, etc, etc.

In any case, you shouldn't be surprised when highly publicized claims are
not achieved. Generally those claims are just part of the overall hype
associated with any new idea.

The truth is: (If I may be so bold as to claim to know the truth ;)

1- OO, when properly employed, does enhance the reusability of
software. But it does so at the cost of complexity and design
time. Reusable code is more complex and takes longer to design
and implement. Furthermore, it often takes two or more tries
to create something that is even marginally reusable.

2- OO, when properly employed, does enhance the software's resilience
to change. But it does so at the cost of complexity and design
time. This trade off is almost always a win, but it is hard to
swallow sometimes.

3- OO does not necessarily make anything easier to understand.
There is no magical mapping between the software concepts and
every human's map of the real world. Every person is different.
What one person perceives to be a simple and elegant design, another
will perceive as convoluted and opaque.

4- If a team has been able, by applying point 1 above, to create
a repository of reusable items, then development times can begin
to shrink significantly due to reuse.

5- If a team has been able, by applying point 2 above, to create
software that is resilient to change, then maintenance of that
software will be much simpler and much less error prone.

In short, nothing has gone wrong with OO. It is a technology, and it
delivers everything it was designed to deliver, and perhaps a bit more.
That it doesn't live up to the naive claims made by naive or insincere people
is unfortunate, but not unexpected.

>
> Is the problem with the immature OO methodologies (OO analysis and design in particular)?

No, these techniques have been a major contribution to software engineering,
and have gone a long way towards improving the way we build software.

> or is it the deficiency in the development tools used like C++ or Smalltalk ?

No, the tools are more or less adequate for the job. IMHO, someone
who blames a language for a failure should be looking a bit closer to
home for the cause.

> Or is it the steep difference in thinking between the traditional and OO schools?

I don't think it's the steepness of the difference, although the difference
can be very steep. Instead, I think it is the disagreement among OO
authorities about the endpoint of that learning curve.

For example, some folks will tell you that the secret of OO is to think of the
world in terms of objects. Others will tell you that it is to think
of the structure of the software in terms of polymorphic interfaces. Still
others will tell you that it is to decouple the domains of the problem
by describing them using macros that can be statically bound at compile time.

Which is right? Which is real? There is a *lot* of confusion out there.
That some folks might not be experiencing any of the benefits of OO does
not surprise me. (BTW, my own choice is the one about structuring the software
in terms of polymorphic interfaces.)
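
For instance, here is a minimal C++ sketch of what I mean by structuring
the software in terms of polymorphic interfaces (the class names are
invented for the example, not taken from any product): the high-level
policy depends only on an abstract interface, so new implementations can
be plugged in without touching the caller.

  #include <iostream>

  // The abstract interface: the only thing the high-level code knows about.
  class Modem {
  public:
      virtual ~Modem() {}
      virtual void dial(const char* number) = 0;
      virtual void send(char c) = 0;
  };

  // One concrete implementation; others can be added later without
  // changing Communicator at all.
  class HayesModem : public Modem {
  public:
      void dial(const char* number) { std::cout << "ATDT" << number << "\n"; }
      void send(char c) { std::cout << c; }
  };

  // The high-level policy depends on the interface, not on any vendor.
  class Communicator {
  public:
      Communicator(Modem& m) : modem_(m) {}
      void sendMessage(const char* number, const char* msg) {
          modem_.dial(number);
          for (const char* p = msg; *p; ++p) modem_.send(*p);
      }
  private:
      Modem& modem_;
  };

  int main() {
      HayesModem hayes;
      Communicator comm(hayes);
      comm.sendMessage("5551234", "hello\n");
      return 0;
  }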

--
Robert C. Martin | Design Consulting | Training courses offered:
Object Mentor | rma...@oma.com | Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 | C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Robert C. Martin

Dec 4, 1996, 3:00:00 AM
to

In article <slrn5a9o60...@matilda.alt.net.au>,
ha...@matilda.alt.net.au (Harry Protoolis) wrote:

> The traditional techniques all suffered from a number of significant
> flaws. Perhaps the most damaging one was what I (rather unkindly) think
> of as 'The glorification of idiots' phenomenon. What I mean by this is
> that projects were typically infested by a group of people who never
> wrote any software, but spent most of the budget drawing diagrams that
> the implementors never used.

Much to my dismay, there are some OO methods that are promoting
the same scheme. The "analysts" draw nice pretty little diagrams, and
even run them through simulators to "prove" that they work. These
diagrams are then run through a program that generates code. Programmers
who maintain that code generator have to make sure that the "right" code
is generated. They have to make the program work.

In another case, I have worked with a client who had a bunch of
"architects" doing nothing but drawing pretty Booch diagrams and
then throwing them over the wall to a bunch of programmers. The
programmers hated the architects and ignored what they produced.

>
> In fact IMHO an OO team has no place for anyone who cannot do all

> three tasks. [Analysis, Design, and Implementation]

Agreed, emphatically.

> Jim Coplien wrote an excellent pattern called
> 'Architect also Implements' which covers very nicely the reasoning
> behind not allowing non-implementors to design systems.

Software architects who do not implement will be ignored by the
people who actually *do* implement. An architect cannot be effective
unless he/she really understands the problems that the implementors
are facing today, now, this minute.

Bill Gooch

Dec 4, 1996, 3:00:00 AM
to

Ell wrote:
>
> : Bill Gooch wrote:
> : >
> : > Ahmed wrote:
> : > > ....
> : > > Some of these claims for instance

> : > > 1 - high reusability of objects and frameworks
> : >
> : > While this may be claimed about specific frameworks, it is
> : > not IMO a valid generalization about OOT. [Although] It is feasible

> : > and important to design and implement objects which achieve
> : > immediate *reuse*, general *reusability* is quite rare, and
> : > exceedingly difficult to achieve, IME....
>
> How about all of the objects that are reusable in PowerBuilder? Things
> like Window, SQLCA, MLE, SLE, etc. objects which one uses time and time
> again. Similarly with other frameworks like MFC, where one uses
> CDocument, CDialog, CView etc classes time and time again....

Sure, there are generally reusable thingies out there,
but mostly they are either of a very generic nature (like
dialogs and documents), or they are targeted at a vertical
niche market. In either case, the effort to develop them
is much greater than that required for quality application-
specific code, and their reusability is still limited. If
you hit their limits, then you have to either extend them
at your own expense, or start from scratch. This can be a
very painful experience, depending on the circumstances (a
few years ago, I had the misfortune of needing to extend
Borland's OWL/C++ in Windows 3.1 - what a nightmare!).

The bulk of OO code does not achieve general reusability,
or any reusability at all outside of a very narrow scope.
OTOH, immediate reuse is fairly common (I'd say it's a key
characteristic of quality OO software).

Ahmed

Dec 4, 1996, 3:00:00 AM
to

Bill Gooch wrote:
>
> Ahmed wrote:
> > ....
> > Object Oriented Technology came with quite promising claims that if achieved can benefit the software
> > development companies and organisations millions of pounds.
> >
> > Some of these claims for instance
> > 1 - high reusability of objects and frameworks
>
> While this may be claimed about specific frameworks, it is
> not IMO a valid generalization about OOT. It is feasible

> and important to design and implement objects which achieve
> immediate *reuse*, general *reusability* is quite rare, and
> exceedingly difficult to achieve, IME. Typically the costs
> outweigh the benefits.
>
> To be clear what I mean by "immediate reuse" - it is most
> often fine-grained (method and protocol level) reuse of
> behavior via inheritance, delegation, etc. which is readily
> achievable and most important. Medium-grained (class level)
> reuse is also feasible, although it requires greater design
> effort and foresight (and/or prior experience in the domain).
> Large-grained (framework level) reuse is much harder (I think
> somewhat exponentially with the number of classes/protocols/
> relationships involved), and much more rarely achieved.
>

Actually, immediate reuse can be achieved to a certain extent with
the traditional structural methods, if a good design is adopted.

What I understand from this is that it is not convenient to reuse
objects from other applications because they are built with different
perspectives..

Does this mean that if two organizations developed almost identical applications,
the objects developed could not be reused between them?
Is that not a deficiency in OO?
Every programmer is tackling the same problem using his own perception
of the problem..his own abstraction..
The concept behind OO is that it deals with pieces of software as
tangible objects, exactly as the real world works..however, in the real world
every object has a clear behaviour and perception for everybody,
while in OO software each object has a behaviour according to
the perception of its designer..!!

The problem is that many organizations avoid moving toward OO because
the costs of the transfer to OO (training programmers / changes in organizational
standards / new tools / new analysis and design methods / legacy
systems / etc.) are much higher than the benefit of "immediate reuse".

Another point regarding inheritance: we know that Visual Basic does not
support inheritance, yet you can build a system much faster, and with much
less code, than by using Visual C++.

I am not saying that we should move back to the traditional structural methods.
No, I have suffered enough from them; I actually like OO because of its
strong features..But I want to know why it is not moving so fast,
regardless of the huge amount of push it has received from the major players
in the software industry..I believe that OO is still not mature enough in
certain aspects, and this is what I am trying to find out..


Cheers
Ahmed

Piercarlo Grandi

Dec 4, 1996, 3:00:00 AM
to

>>> "ACQ95AA" == Ahmed <ACQ...@shef.ac.uk> writes:

ACQ95AA> Hello Every Body I am a new research student working at the
ACQ95AA> field of Object Oriented Technology...I have several critical
ACQ95AA> opinions about Object Oriented in general, and I like to
ACQ95AA> participate it with you and hear you expert comments and
ACQ95AA> opinions

ACQ95AA> Object Oriented Technology came with quite promising claims
ACQ95AA> that if achieved can benefit the software development companies
ACQ95AA> and organisations millions of pounds.

ACQ95AA> Some of these claims for instance
ACQ95AA> 1 - high reusability of objects and frameworks

Of objects? What do you mean?

As to ``frameworks'', which I choose to interpret here as ``libraries of
software modules'', there is no guarantee that by using OO one _does_
achieve _high_ reusability; one _can_ achieve _higher_ reusability.

Whether higher reusability _is_ achieved depends on many factors other
than the adoption of OO technology, and whether the reusability achieved
is _high_ depends among other things on the problem domain.

That it is _possible_ to achieve _higher_ reusability with OO than other
approaches is substantiated by some sparse but compelling evidence.

ACQ95AA> 2 - Resilience to change, i.e. low software maintenance and
ACQ95AA> evolution cost

As a _possible_ consequence of _possibly_ _higher_ reuse. Again, there
is some sparse but compelling evidence that this actually happens.

ACQ95AA> 3 - Easier understanding by the user and Natural transition
ACQ95AA> between the analysis, design, implementation because they all
ACQ95AA> use tangible perceived objects.

This is not a claim of OO technology, but of OO-speak; in other words, it
is purely marketing hype unsubstantiated by any evidence whatsoever. You
won't find any such claim in anything but marketing drivel.

Any such claim, as you write it, is also manifestly absurd: the very
notion of something that is both "tangible perceived" is amusing to say
the least.

ACQ95AA> However the reality is not so bright as claimed..

Indeed, because most all OO-speak salesmen paint a rosy picture as you
describe it above.

OO in and by itself does not magically and necessarily "achieve" the
magic of "high reusability", and in particular because there is no
reason why the use of "tangible perceived objects" should give any
benefit like "Easier understanding by the user".

ACQ95AA> if so, then nobody today thought to develop a software on the
ACQ95AA> traditional structural methods...

Software technologies depend more on sociological than technological
factors. In particular on the twenty-year cycle of induction of new
generations of computer scientists in industry, and their reaching
``manager'' status.

ACQ95AA> My question is what is wrong with OO ? why it did not achieved
ACQ95AA> its targets yet.? What are the main obstacles?

Inflated expectations? Marketing drivel? Facile abuse of OO-speak?

Those who do practice OO as a technology, and not as the promise of the
Age of Aquarius in CS, find it a very useful concept that does deliver
some measurable benefits.

I find the discussion of OO and other issues in the second edition of
"The Mythical Man-Month" a rather good treatment of some of the
issues involved.

Ahmed

Dec 4, 1996, 3:00:00 AM
to a.alk...@dcs.shef.ac.uk

Harry Protoolis wrote:
>
> On Tue, 03 Dec 1996 17:38:37 +0000, Ahmed <ACQ...@shef.ac.uk> wrote:
> >Hello everybody,
> >
> >Object Oriented Technology came with quite promising claims which, if achieved, could save
> >software development companies and organisations millions of pounds.
> >
> >Some of these claims, for instance:
> >1 - high reusability of objects and frameworks
> >2 - Resilience to change, i.e. low software maintenance and evolution cost
> >3 - Easier understanding by the user, and a natural transition between analysis, design and
> >implementation, because they all use tangible, perceived objects.
> >
> >However, the reality is not as bright as claimed. If it were, nobody today would think of
> >developing software with the traditional structured methods...
> >
> >My question is: what is wrong with OO? Why has it not achieved its targets yet?
> >What are the main obstacles?
>
> I think this is overly negative. OO has not been, and never will be, a
> 'silver bullet' to solve all software development problems, but no-one
> but a few spin doctors ever claimed it would be.
>
> However, the real question should be 'has OO made a significant positive
> difference', and in my experience the answer is a resounding 'yes!'.
>


Dear Harry,
I agree with you that OO has many advantages, but I cannot see the significant improvement
you describe.

The important question is how to measure the success of OO.
Can you please tell me by what criteria you measured this significant difference?
Is it
(code reusability / software development time / software performance / software reliability /
software cost / software portability / ...etc..)? These are the issues that count for any organization.

Actually, I am looking for any references that compare, with figures and statistics,
different applications developed using OO and the traditional methods.

All I have found are examples that show OO is workable; for me this
is not evidence of a significant difference.


Another thing: since you are familiar with OO,
could you please tell me what the best environment is to develop an OO application?
(In my case most of our applications are database systems.)

Thank you very much

Regards,
Ahmed

Roger T.

Dec 4, 1996, 3:00:00 AM
to


Harry Protoolis <ha...@matilda.alt.net.au> wrote in article
<slrn5a9o60...@matilda.alt.net.au>...


> >My question is: what is wrong with OO? Why has it not achieved its targets yet?
> >What are the main obstacles?
>

> The traditional techniques all suffered from a number of significant
> flaws. Perhaps the most damaging one was what I (rather unkindly) think
> of as 'The glorification of idiots' phenomenon. What I mean by this is
> that projects were typically infested by a group of people who never
> wrote any software, but spent most of the budget drawing diagrams that
> the implementors never used.

I agree with your thesis in general, but I would like to point out that
the glorification of idiots also included the glorification of those
implementors who had no use at all for any high-level analysis and design efforts.

There are plenty of implementors who could be called idiots for engaging in
what I call "stream of consciousness" coding.

> The main contribution of OO has been what could be termed 'the
> glorification of the implementor'. This has been achieved by the
> effective marriage of Analysis, Design and Implementation. The result
> is that every member of the team does all three of the key tasks.

This is true, but the question I wonder about is how much importance, and
therefore time, the implementor invests in the Analysis and Design parts.

The most important result of OO is that it encourages the implementor to
value these development stages more highly than he might otherwise.

Implementation is the ultimate goal and OO is a means to reach
that goal and also provide a quality product.

> In fact IMHO an OO team has no place for anyone who cannot do all

> three tasks. Jim Coplien wrote an excellent pattern called


> 'Architect also Implements' which covers very nicely the reasoning
> behind not allowing non-implementors to design systems.

Agree.

> Certainly the mecca of automatic reuse has not been achieved, but the
> quantity and quality of 3rd party components available for most OO
> languages already exceeds that available for their non-OO counterparts,
> and IHMO this suggests a bright future.

Agree.

> Certainly OO has not made writing software trivial or automatic, but
> then, *nothing ever will*.

It will make writing some ever-growing subset of software trivial or
automatic, and free us to attack more difficult coding problems.

Roger T.

Bill Gooch

Dec 4, 1996, 3:00:00 AM
to Ahmed

Ahmed wrote:
>
> Actually immediat reuse can be acheived to a certain extent with
> the traditional structural methods if they adopted a good design

A key phrase here is "to a certain extent." OO allows
more effective reuse (less redundancy, less copy-and-edit)
than alternatives.

> What I understand from this is that it is not convinient to reuse
> objects of other applications because they are built with different
> perspectives..

I think "not convenient" is a bit of an understatement -
"very difficult" might typically be more accurate.

> Does this mean,If two organizations developed almost typical applications
> does not mean that the objects developed can be reusable between them..
> Is not this a deficiency in OO.

As compared to what? Non-OO software? I think not.

Two different automobile designs rarely share any
compatible parts (except those which are industry-
standardized, like oil filters), unless the designers
worked together with that goal in mind.

> Every programmer is tackling the same problem using his own perception
> of the problem..his own abstraction..

Yes, and the alternative is?...

> The concept behind OO is that it deals with peices of software as
> tangible objects exactly as real world works..

Not at all. "How the real world works" is by no means
obvious or well understood ("real world" in itself is
an exceedingly vague term), and you'd need to provide
some definitions of these things, as well as evidence
to support the above assertion.

> however in real world
> every object has a clear behaviour and perception by every body,

Not in the slightest.

> while in the OO software each object has a behaviour according to
> the perception of his designer..!!

Sometimes. The designer probably hopes it does.

> The problem is that many organization avoid moving toword OO because
> the transfter cost to OO ( training programmers / organization change in
> standards / new tools / new analysis and design methods / legacy
> system/ etc. ) are much higher than the benifit of "immediate reuse"

OK - why is this a problem?

> Another point regarding inheritance, we know that Visiual Basic does not
> have the capability of inheritance, however you can build a system
> much faster compared to using visiual C++ with much less code.

Depends what system, doesn't it? VB isn't ideal for
all computer applications; C++ is probably a better
choice for at least some of them.

> I am not saying that we should move to the traditional structural methods
> No, I have suffered enough from it, I actually like OO because of its
> strong features..But I want to know why it is not moving so fast..

Patience is a virtue. Rapid growth and early acceptance
can lead to backlash and equally rapid decline.

Dr. Richard Botting

Dec 4, 1996, 3:00:00 AM
to

(Followups trimmed to comp.object and comp.software-eng)
Robert C. Martin (rma...@oma.com) wrote:
: In another case, I have worked with a client who had a bunch of

: "architects" doing nothing but drawing pretty Booch diagrams and
: then throwing them over the wall to a bunch of programmers. The
: programmers hated the architects and ignored what they produced.

Software people have four traditional ways of handling any problems:
(1) ignore it
(2) invent a tool
(3) try to be methodical
(4) define your program to be the solution and standardize it.

OO seems to have inherited these virtual methods.
(HHOS)

--
dick botting http://www.csci.csusb.edu/dick/signature.html
Disclaimer: CSUSB may or may not agree with this message.
Copyright(1996): Copy freely but say where it came from.
I have nothing to sell, and I'm giving it away.

Joe Winchester

Dec 4, 1996, 3:00:00 AM
to

Harry Protoolis wrote:

> In fact IMHO an OO team has no place for anyone who cannot do all
> three tasks. Jim Coplien wrote an excellent pattern called
> 'Architect also Implements' which covers very nicely the reasoning
> behind not allowing non-implementors to design systems.

Harry,

Do you know where I might be able to get hold of Coplien's pattern?

Regards,

--
Joe Winchester
Jo...@concentric.net
103276,2...@Compuserve.com

Matthew Gream

Dec 4, 1996, 3:00:00 AM
to

Hi Ahmed,

On Tue, 03 Dec 1996 17:38:37 +0000, Ahmed (ACQ...@shef.ac.uk) wrote:

> Object Oriented Technology came with quite promising claims that if
> achieved can benefit the software development companies and
> organisations millions of pounds.

OO is no different from many other technologies. It comes with many
promises, and it can fulfill these if used in the right situations in
the right way. It is not a panacea, which is not a new or surprising
statement.

When you consider "Object Oriented Technology", you must also consider
Domain Analysis, Architecture, Patterns and other lifecycle goodies
which are not inherently Object Oriented, but seem to be a necessary
requirement for successful OO benefits.

> Some of these claims for instance
> 1 - high reusability of objects and frameworks

This can be achieved, but it requires discipline, investment and
experience. Discipline to work hard towards reusability, and to
maintain that reusability (through evolution). Investment in terms of
the effort involved to establish generic solutions (generic solutions
are generally [:-)] much harder than specific solutions). Experience
to make the correct decisions when constructing re-usable items (to
have some degree of visibility).

> 2 - Resilience to change, i.e. low software maintenance and evolution cost

How flexible is the software? This has a lot to do with architecture:
"if you don't get the architecture right up front, you may as well pack
up and go home" [quote made by a fellow engineer]. The architecture in
many ways predicts what will happen to the software over its
lifecycle. If you don't get this right, you will need to change the
architecture: this is usually not a trivial task. This is not exclusively
an OO issue though.

OO includes inheritance. This promotes generalisation -- factoring out
commonalities -- which reduces dependencies. Reduction in dependencies
makes maintenance and evolution more predictable and cheaper. It is
perhaps predictability that is more important than anything, better to
correctly assess that all the costs are high up front, before starting,
rather than finding out later.
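
A tiny C++ sketch of the kind of factoring I mean (the names are invented
for illustration): the bookkeeping around saving lives once in the
generalisation, so the concrete document types neither duplicate it nor
depend on each other, and adding a new type is pure addition.

  #include <iostream>

  // The commonality, factored out once: every document is saved the
  // same way, and only the writing of the body varies.
  class Document {
  public:
      Document() : dirty_(true) {}
      virtual ~Document() {}
      void save(const char* path) {
          std::cout << "saving to " << path << "\n";
          write();               // the only part subclasses supply
          dirty_ = false;
      }
      bool isDirty() const { return dirty_; }
  protected:
      virtual void write() = 0;
  private:
      bool dirty_;
  };

  class TextDocument : public Document {
  protected:
      void write() { std::cout << "plain text body\n"; }
  };

  // A new kind of document changes nothing above it.
  class SpreadsheetDocument : public Document {
  protected:
      void write() { std::cout << "cells and formulas\n"; }
  };

  int main() {
      TextDocument doc;
      doc.save("notes.txt");     // reuses the shared bookkeeping
      return 0;
  }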

There are also CASE tools, which make evolution and maintenance much
easier and achievable (they also keep you focused at a higher level of
logic). Having a CASE tool take care of menial details (file
organisation, includes, class definitions, method stubs, etc) and take
over some of the verification roles (use cases and scenarios) is very
important. Though, CASE tools are not inherently an OO thing.

There are probably many more items you can mention here as well.

> 3 - Easier understanding by the user, and a natural transition between analysis, design and
> implementation, because they all use tangible, perceived objects.

The transition is definitely a good thing. Being able to iteratively
build software is much more predictable as well. What you've described seems
to be much easier to say than to do, from the experience I've seen
around me. Getting the right architecture and objects up front requires
experience (and therefore, knowledge). It also requires an appropriate
balance between the actual system requirements, the system domain and
other domains. This requires time and experience.

> However, the reality is not as bright as claimed. If it were, nobody today would think of
> developing software with the traditional structured methods...

> My question is: what is wrong with OO? Why has it not achieved its targets yet?
> What are the main obstacles?

I would say that it is slowly achieving its targets and that there are
three main inter-related obstacles: time, experience and collaboration.
We need more of these to help the overall feedback loop.

> Is the problem with the immature OO methodologies (OO analysis and design in particular)?
> Or is it a deficiency in the development tools used, like C++ or Smalltalk?
> Or is it the steep difference in thinking between the traditional and OO schools?
> Or is it related to the difficulty of object classification?
> Or is it because of the vast legacy systems built using the traditional methods?
> Or is it a combination of many other factors...?

All of these would seem to be problems, from my (limited) experience. The
thinking "mindset" is perhaps one of the most important.

> I know that giving a precise answer to such a complex question is very difficult, but I would like to
> hear the comments of people working in the field who have suffered through many difficulties.

> I would really appreciate any participation, response, or even a pointer to a good reference,
> and would be very grateful if the opinions were supported by some evidence...

You want evidence ? I need more experience :-). Please excuse my bias
towards architecture in the above as well, I think that architecture
and organisation are very important. Architecture is everywhere, from
the big to the small (in the software, in the process, in the people,
in the organisation, etc). Most of the software problems I have
encountered can be traced back to architectural issues of one form or
another.

Cheers,
Matthew.

--
Email: Matthe...@Jtec.com.au
Phone: (02) 390-0194
Fax: (02) 364-0055
Post: Jtec (R&D) Pty Ltd.
Unit 3, 118-122 Bowden St.
Meadowbank NSW 2114.

Tom Bushell

Dec 5, 1996, 3:00:00 AM
to

On Wed, 04 Dec 1996 08:45:22 -0600, rma...@oma.com (Robert C. Martin)
wrote:

>ha...@matilda.alt.net.au (Harry Protoolis) wrote:
>
>> The traditional techniques all suffered from a number of significant
>> flaws. Perhaps the most damaging one was what I (rather unkindly) think
>> of as 'The glorification of idiots' phenomenon. What I mean by this is
>> that projects were typically infested by a group of people who never
>> wrote any software, but spent most of the budget drawing diagrams that
>> the implementors never used.
>

>Much to my dismay, there are some OO methods that are promoting
>the same scheme. The "analyst" draw nice pretty little diagrams, and
>even run them through simulators to "prove" that they work. These
>diagrams are then run through a program that generates code. Programmers
>who maintain that code generator have to make sure that the "right" code
>is generated. They have to make the program work.

It is my growing opinion that this is a fundamental problem with all
"formal" design methods, not just OO design. The effort involved in
doing the design is as great as or greater than doing the construction
(coding). Contrast this with doing the blueprints for a bridge - the
design effort is orders of magnitude cheaper than the construction.
(Or so I believe - a civil engineer might correct me on this). Also,
the OO design models I've studied don't seem to be very good maps of
actual real world systems - there seems to be a big gap between high
level architecture and running code. I believe there should be a
fairly smooth continuum from high level to low level of detail.

I'm starting to believe that design and code don't make sense as
separate entities - the design should _become_ the code - the design
documents for an implemented system are used as the foundation of the
code, and then regenerated from the code. Major benefits would be
that design work would not be discarded because it was too difficult
to bring it up to date with reality. Therefore, the design should
never get out of synch. This is a similar idea to reverse engineering,
but not identical.

If anyone knows of tools that would facilitate this approach, I'd
certainly be interested. I've done some very simple prototypes, and
hope to work on the idea in future (when I have more time - Hah!).

-Tom

----------------------------------------------------------
Tom Bushell * Custom Software Development
Telekinetics * Process Improvement Consulting
2653 Highway 202 * Technical Writing
RR#1, Elmsdale, NS
B0N 1M0
(902)632-2772 Email: tbus...@fox.nstn.ns.ca
----------------------------------------------------------

Ell

Dec 5, 1996, 3:00:00 AM
to

Bill Gooch (bi...@iconcomp.com) wrote:

: Ahmed wrote:
: >
: > Actually immediat reuse can be acheived to a certain extent with
: > the traditional structural methods if they adopted a good design

: A key phrase here is "to a certain extent." OO allows
: more effective reuse (less redundancy, less copy-and-edit)
: than alternatives.

: > What I understand from this is that it is not convinient to reuse
: > objects of other applications because they are built with different
: > perspectives..

: I think "not convenient" is a bit of an understatement -
: "very difficult" might typically be more accurate.

In my experience, and observations, it is not "cross application" reuse
of classes/objects that is a problem as much as it is "cross environment"
reuse of classes/objects. Especially wrt physical design reuse. I.e.
getting classes/objects to physically work across different environments
is difficult indeed.

: Two different automobile designs rarely share any

: compatible parts (except those which are industry-
: standardized, like oil filters), unless the designers
: worked together with that goal in mind.

: > Every programmer is tackling the same problem using his own perception
: > of the problem..his own abstraction..

: Yes, and the alternative is?...

Relying on domain experts for fundamental application semantics and as
well relying on domain experts to determine implementation necessaries for
application use. Relying on domain experts for implementation of Use
Cases.

: > The concept behind OO is that it deals with peices of software as


: > tangible objects exactly as real world works..

Yes! Well not *exactly* as the real world operates, but in a way that
utilizes, and is anchored upon "real world" domain abstractions,
patterns, and semantics.

: Not at all. "How the real world works" is by no means

: obvious or well understood ("real world" in itself is
: an exceedingly vague term), and you'd need to provide
: some definitions of these things, as well as evidence
: to support the above assertion.

If we grasp, as you have alluded to in many of your previous posts, that
development should start with understanding domain abstractions and
relationships, how is that different from basing project analysis and
architecture on "tangible objects exactly as the real world operates"?

: > however in real world


: > every object has a clear behaviour and perception by every body,

: Not in the slightest.

The perception of object behavior ranges, in various cases, from being
very clear to everyone to being discernible only to a handful.

: > while in the OO software each object has a behaviour according to


: > the perception of his designer..!!

: Sometimes. The designer probably hopes it does.

Yes, the pragmatists and empiricists hope that they can do whatever they
want to analysis and physical design based on their narrow inclinations.

In actuality there is an objective reality (or at the very least objective
human conception) behind what goes on in an application and its domain
that developers should attempt to model as closely as possible.

: > The problem is that many organization avoid moving toword OO because


: > the transfter cost to OO ( training programmers / organization change in
: > standards / new tools / new analysis and design methods / legacy
: > system/ etc. ) are much higher than the benifit of "immediate reuse"

: OK - why is this a problem?

Because "immediate reuse" should not be the only, or main criteria by
which an organization adopts one development paradigm or another (e.g. OO
vs. others), as I see it.

: > I am not saying that we should move to the traditional structural methods


: > No, I have suffered enough from it, I actually like OO because of its
: > strong features..But I want to know why it is not moving so fast..

Seems pretty fast to me. From what I read in most literature in the
computer field, OO is de rigueur - virtually the only thing being
talked about as a development paradigm. That is true even in the mainframe
world.

: Patience is a virtue. Rapid growth and early acceptance


: can lead to backlash and equally rapid decline.

Excelsior! As quickly as possible! OO already has nearly 30 years of
growth behind it.

Elliott


Ell

Dec 5, 1996, 3:00:00 AM
to

Tom Bushell (tbus...@fox.nstn.ns.ca) wrote:

: rma...@oma.com (Robert C. Martin) wrote:
: >
: >ha...@matilda.alt.net.au (Harry Protoolis) wrote:
: >>
: >> The traditional techniques all suffered from a number of significant
: >> flaws. Perhaps the most damaging one was what I (rather unkindly) think
: >> of as 'The glorification of idiots' phenomenon. What I mean by this is
: >> that projects were typically infested by a group of people who never
: >> wrote any software, but spent most of the budget drawing diagrams that
: >> the implementors never used.

: >Much to my dismay, there are some OO methods that are promoting
: >the same scheme. The "analyst" draw nice pretty little diagrams, and
: >even run them through simulators to "prove" that they work. These
: >diagrams are then run through a program that generates code. Programmers
: >who maintain that code generator have to make sure that the "right" code
: >is generated. They have to make the program work.

: It is my growing opinion that this is a fundamental problem with all
: "formal" design methods, not just OO design. The effort involved in
: doing the design is as great or greater than doing the construction
: (coding).

Even if "doing" good OO analysis (drawing "nice pretty little diagram" and
*much* more) cost more time and dollars than OO coding, it pays off
because doing good OO analysis is generally decisive to the most rapid and
Use Case effective development of the project.

: I'm starting to believe that design and code don't make sense as


: separate entities - the design should _become_ the code - the design
: documents for an implemented system are used as the foundation of the
: code, and then regenerated from the code.

And if you think about it, the only real way for what you call "design" to
"become" the code is if a perspective larger than being mainly focused on
physical coding *leads* physical coding.

: Major benefits would be


: that design work would not be discarded because it was too difficult
: to bring it up to date with reality. Therefore, the design should
: never get out of synch.

This is precisely the approach of Booch/Coad/Jacobson/Rumbaugh from what I
know about their methods. Physical design and coding is based on/rooted
in analysis concepts. Physical design and coding are simply additive to
the analysis models.

Elliott


Brian Gridley

Dec 5, 1996, 3:00:00 AM
to

>: > Every programmer is tackling the same problem using his own perception
>: > of the problem..his own abstraction..
>
>: Yes, and the alternative is?...
>

Communication, teams, and a development environment 2 steps beyond ENVY.

If all the code that has been written in various shops became available to the
general developing public within 3 months of its creation, we would have spent
far less time repeating ourselves, would have the most robust development
environment and reusable code, and would have eliminated all other (non-ST-based)
language competitors. Instead, we are all still solving the same problems,
over and over again. Is anyone interested in building a communal repository
with a system for charging users and rewarding developers for code?


Marnix Klooster

Dec 5, 1996, 3:00:00 AM
to

tbus...@fox.nstn.ns.ca (Tom Bushell) wrote:

> It is my growing opinion that this is a fundamental problem with all
> "formal" design methods, not just OO design. The effort involved in
> doing the design is as great or greater than doing the construction

> (coding). Contrast this with doing the blueprints for a bridge - the
> design effort is orders of magnitude cheaper than the construction.
> (Or so I believe - a civil engineer might correct me on this). Also,
> the OO design models I've studied don't seem to be very good maps of
> actual real world systems - there seems to be a big gap between high
> level architecture and running code. I believe there should be a
> fairly smooth continuim from high level to low level of detail.

Couldn't agree with you more.

> I'm starting to believe that design and code don't make sense as
> separate entities - the design should _become_ the code - the design
> documents for an implemented system are used as the foundation of the

> code, and then regenerated from the code. Major benefits would be


> that design work would not be discarded because it was too difficult
> to bring it up to date with reality. Therefore, the design should

> never get out of synch. This a similar idea to reverse engineering,
> but not identical.

One approach in the direction you sketch is the formal method
called the "refinement calculus". Essentially, a (formal)
specification is considered to be a very high-level
non-executable program, and programming tries to `refine' that
program to an equivalent one that is executable. The refinement
calculus gives formal proof rules with which refinements can be
proven correct. Therefore, the side-effect of developing a
program this way is a proof that it meets its specification. In
other words, we have a `provably correct program.'
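
As a toy illustration of a single refinement step (the notation loosely
follows Morgan's book; the example itself is mine, not taken from it):
the assignment law says that the specification

    w : [ pre , post ]

is refined by the assignment w := E provided that pre implies
post[w := E]. For instance,

    x : [ x > 0 , x > 10 ]    is refined by    x := x + 10

because substituting x + 10 for x in the postcondition gives
x + 10 > 10, i.e. x > 0, which is exactly the precondition.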

> If anyone knows of tools that would facilitate this approach, I'd
> certainly be interested. I've done some very simple prototypes, and
> hope to work on the idea in future (when I have more time - Hah!).

Because of the emphasis on proof, the refinement calculus
requires more mathematical skills of the programmer than other
methods. Also, for larger programs having some kind of proof
support tool is a necessity. Finally, it often happens that the
specification must be changed halfway. With proper tool support
it should be easy to check which refinements still hold, and
which don't. Such tools are under development; try the "Formal
methods" web page at

http://www.comlab.ox.ac.uk/archive/formal-methods.html

If you want to know more on the refinement calculus, you could
begin with Carroll Morgan, "Programming from Specifications",
Second Edition, Prentice-Hall, 1994, ISBN 0-13-123274-6.

> -Tom

Groetjes,

<><

Marnix
--
Marnix Klooster | If you reply to this post,
mar...@worldonline.nl | please send me an e-mail copy.

Tim Ottinger

Dec 5, 1996, 3:00:00 AM
to

Matthew Gream wrote (And all in all, a thoughtful posting):
[in response to question:]

> > 2 - Resilience to change, i.e. low software maintenance and evolution cost
>
> How flexible is the software? This has a lot to do with architecture:
> "if you don't get the architecture right up front, you may as well pack
> up and go home" [quote made by a fellow engineer]. The architecture in
> many ways predicts what will happen to the software over its
> lifecycle. If you don't get this right, you will need to change the
> architecture: this is usually not a trivial task. This is not exclusively
> an OO issue though.

By the way, while this is "essentially" right, there are plenty of cases
where people did not build the correct architecture the first time out.
In fact, many of us will tell you that it's nigh impossible to build it
right the first time out if the software project is interesting enough.

The iterative incremental model ain't perfect, but using short cycles
and daring to revisit can cover an awful lot of design sins. If you
commit to an architecture, then you're probably stuck.

I propose the motto "design by distrust!". If the work is suitably
isolated from the physical and business requirements, then you can
reliably insulate yourself from those things you don't believe will
be invariant, even if you don't know which direction they'll go in.

Of course, this *is* your next point, but I wanted to stress that
it's not a top-down thing, and the process is not completely
unforgiving. Otherwise, we'd be better off outside of OO for large
projects.

The architecture thing, even the business model, can be essentially
right and correctably wrong, and sometimes that's as close to perfection
as you'll get. Especially early on in a project.

> OO includes inheritence. This promotes generalisation -- factoring out
> commonalities -- which reduces dependencies. Reduction in dependencies
> makes maintenance and evolution more predictable and cheaper. It is
> perhaps predictability that is more important than anything, better to
> correctly assess that all the costs are high up front, before starting,
> rather than finding out later.

Design by distrust! Estimate by distrust!

> > Is the problem with the immature OO methodologies ( OO analysis and design in specific ) ?
> > or is it the deficiency in the development tools used like C++ or Smalltalk ?
> > or is it the steep difference in thinking between the traditional and OO schools ?
> > or is it related with the difficulty of object classification ?
> > or is it because of vast legacy systems done using the traditional methods ?
> > or is a combination of many other factors...?
>
> All of these would seem to be problems from my, limited, experience. The
> thinking "mindset" is perhaps one of the most important.

To begin with, I don't accept the idea that there is a "problem".
I'll talk about that in a second.

Secondly, I think that the difference in thinking between OO and
structured is a matter of breaking mental habits, not the deep and
nearly uncrossable chasm people make it out to be. We have people
who jump from COBOL to C++ and OO in one great leap. We have a
number of people who came from procedural 4GLs to OO. It happens
very frequently, maybe every week or every day all over the world
with thousands and thousands of people.

A lot of the "problem" is with organizations. If they've been
successful
to any degree with other methods (including brute force) then they are
unlikely to "mess with the formula" by going into OO. A lot of people
still use Fortranm, COBOL, RPG, etc., successfully - untouched by the
Client/Server revolution, CASE, even RDBMS. No person or company is
obliged to stay "technically current", and so many will not. And they
might do fine at it.

The fact of the matter is that OO is a change in management and project
planning as much as anything else, and a lot of managers aren't too
keen on having their world ripped out from under them. That's just
an education thing. It'll come in time, but the focus on managing
OO is fairly recent in the scheme of things.

There's an underlying myth of the question which was originally asked.
"If method A is good, then why does anybody do anything else", or
"If some people don't use A, then it must not be very good".

If I asked "If software is a good career, then why do some people
still raise crops?" you might laugh. Software is not for everybody.
For that matter, you know that managers and financial people
make more money than programmers, so what's wrong with you that you
don't go into management instead? Well, maybe programming is fine
for you. Maybe that money benefit isn't all that important to you.
Does that mean that managers *don't* make more money?

Likewise OO. It's not a winner-take-all battle with the rest
of the DP world. It doesn't have to be popular, and it doesn't
have to be used exclusively by all software companies in order
to work.

It's just a set of mental tools you can use if you want the
benefits it can provide.

Some people are gung-ho, some are skeptical, some don't care. And
they have that right.

--
Tim
********************************************************************
In some sense, all of life is design. The way we pick our friends,
the way we plant a garden, and the way we choose software is all
design. Sometimes we do things by habit, other times by carefully
weighing the pros and cons, and sometimes we make experiments.
-- Ralph Johnson --

Russell Corfman

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

It's in the proceedings from the PLoP-94 conference. The
proceedings are published by Addison-Wesley in the book
"Pattern Languages of Program Design" edited by
Coplien and Schmidt, ISBN 0-201-60734. Most good book stores
should have it and the PLoPD2 book from PLoP-95.

I believe there is a copy on the web that is reachable
from Coplien's homepage http://www.bell-labs.com/people/cope/
look for "organizational and process patterns"

Russell Corfman
corf...@agcs.com

In article <32A65E...@concentric.net>,

Bill Gooch

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

[followups trimmed]

Ell wrote:
>
> Bill Gooch (bi...@iconcomp.com) wrote:
> : Ahmed wrote:

> : ....

> : > Every programmer is tackling the same problem using his own perception
> : > of the problem..his own abstraction..
>
> : Yes, and the alternative is?...
>

> Relying on domain experts for fundamental application semantics and as
> well relying on domain experts to determine implementation necessaries for
> application use.

Fair enough. Also I would have said "sharing explicit
models among all interested parties" to avoid problems
that often arise due to miscommunication (or lack of
communication).

> Relying on domain experts for implementation of Use Cases.

I'm not sure what "implementation of use cases" means
here - can you explain? I'm more familiar with the idea
of software analysts and designers implementing things
based on what they hear from domain experts and users.

> : > The concept behind OO is that it deals with pieces of software as
> : > tangible objects exactly as the real world works..
>
> Yes! Well not *exactly* as the real world operates, but in a way that
> utilizes, and is anchored upon "real world" domain abstractions,
> patterns, and semantics.
>
> : Not at all. "How the real world works" is by no means
> : obvious or well understood ("real world" in itself is
> : an exceedingly vague term), and you'd need to provide
> : some definitions of these things, as well as evidence
> : to support the above assertion.
>
> If we grasp, as you have alluded to in many of your previous posts, that
> development should start with understanding domain abstractions and
> relationships, how is that different from basing project analysis and
> architecture on "tangible objects exactly as the real world operates"?

Fine, all except for the quote, which I can't interpret in
a way that I'm comfortable with. Ill-defined terms combining
to form what looks to me like total nonsense. I can't answer
your question, because I don't know what the quote is trying
to say (your explanation is one of numerous possibilities).

> : > while in the OO software each object has a behaviour according to
> : > the perception of his designer..!!
>
> : Sometimes. The designer probably hopes it does.
>
> Yes, the pragmatists and empiricists hope that they can do whatever they
> want to analysis and physical design based on their narrow inclinations.

If you say so. (Here I have a strange unpleasant feeling of
deja vu.)



> In actuality there is an objective reality (or at the very least objective
> human conception) behind what goes on in an application and its domain
> that developers should attempt to model as closely as possible.

In actuality that is just one opinion. I'm curious,
though: what is "objective human conception?"

> : > The problem is that many organizations avoid moving toward OO because
> : > the transfer cost to OO ( training programmers / organization change in
> : > standards / new tools / new analysis and design methods / legacy
> : > system / etc. ) is much higher than the benefit of "immediate reuse"
>
> : OK - why is this a problem?
>
> Because "immediate reuse" should not be the only, or main criteria by
> which an organization adopts one development paradigm or another (e.g. OO

> vs. others), as I see it....

Good. What in your opinion are some other important criteria?

Actually my question was more about why it matters in general
whether or not "many organizations avoid moving toward OO."
But I like your answer, even if it's responding to a slightly
different question.

Bill Gooch

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

Brian Gridley wrote:
>
[ Ahmed wrote: ]

> >: > Every programmer is tackling the same problem using his own perception
> >: > of the problem..his own abstraction..
> >
[ Bill Gooch wrote: ]

> >: Yes, and the alternative is?...
> >
> Communication, teams, and a development environment 2 steps beyond ENVY.

Good. Can you say a bit more about this?

> If all the code that has been written in various shops became available to the
> general developing public within 3 months of its creation, we would have spent
> far less time repeating ourselves, and have the most robust development
> environment and reusable code, and have eliminated all other language
> competitors (not ST-based).

I don't think so (especially about the part in parens,
since Smalltalk environments aren't clearly superior
to all others - superior to most, yes).

> Instead, we are still all solving the same problems, over and over
> again. Is anyone interested in building a communal repository
> with a system for charging users and rewarding developers for code?

We're working on it. Stay tuned.

Daniel Drasin

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

Ahmed wrote:
>
> Hello Every Body
>
> I am a new research student working at the field of Object Oriented Technology...I have several
> critical opinions about Object Oriented in general, and I like to participate it with you and hear
> you expert comments and opinions

>
> Object Oriented Technology came with quite promising claims that if achieved can benefit the software
> development companies and organisations millions of pounds.
>
> Some of these claims for instance
> 1 - high reusability of objects and frameworks
> 2 - Resilience to change, i.e. low software maintenance and evolution cost
> 3 - Easier understanding by the user and Natural transition between the analysis, design,
> implementation because they all use tangible perceived objects.
>
> However the reality is not so bright as claimed..if so, then nobody today thought to develop a
> software on the traditional structural methods...
>
> My question is what is wrong with OO ? why it did not achieved its targets yet.?
> What are the main obstacles?
>
My $0.02.

The problems I've seen with OO projects arise not from the use of OO,
but from the misuse of OO. Programmers trying to use non-OO methods,
incorrectly applying OO concepts, etc. This is a result of a lack of
OO teaching at educational institutions. Even schools that offer
1 or 2 OO language courses usually fail to educate; they use C++
and only really teach the "C" part. There are very few
universities that make an effort to inculcate students with an
understanding of OO techniques and methods. So it's no wonder
when these graduates try to apply them in the "real world," they
get all fouled up.

Dan

--
Daniel Drasin Applied Reasoning
dra...@arscorp.com 2840 Plaza Place, Suite 325
(919)-781-7997 Raleigh, NC 27612
http://www.arscorp.com

Piercarlo Grandi

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

>>> "tbushell" == Tom Bushell <tbus...@fox.nstn.ns.ca> writes:

tbushell> On Wed, 04 Dec 1996 08:45:22 -0600, rma...@oma.com
tbushell> (Robert C. Martin) wrote:

>> ha...@matilda.alt.net.au (Harry Protoolis) wrote:

harry> The traditional techniques all suffered from a number of
harry> significant flaws. Perhaps the most damaging one was what I
harry> (rather unkindly) think of as 'The glorification of idiots'
harry> phenomenon. What I mean by this is that projects were typically
harry> infested by a group of people who never wrote any software, but
harry> spent most of the budget drawing diagrams that the implementors
harry> never used.

But such people are not at all idiots: they are usually the cleverest
people on the project, from many points of view, especially
self-interest. :-)

rmartin> Much to my dismay, there are some OO methods that are promoting
rmartin> the same scheme. The "analyst" draw nice pretty little
rmartin> diagrams, and even run them through simulators to "prove" that
rmartin> they work. These diagrams are then run through a program that
rmartin> generates code. Programmers who maintain that code generator
rmartin> have to make sure that the "right" code is generated. They
rmartin> have to make the program work.

Both of these observations seem to me rather realistic, from direct and
indirect observation of actual projects.

tbushell> It is my growing opinion that this is a fundamental problem
tbushell> with all "formal" design methods, not just OO design.

This is in part what has made formal methods (as in correctness
proofs/verification) rather less popular than perhaps they should be:
the ``formal'' bit is as large as, and usually as unreliable as, the
``informal'' bit of a project (getting a specification or a proof right
is often about as hard as, and sometimes harder than, getting the
program itself right).

tbushell> The effort involved in doing the design is as great or greater
tbushell> than doing the construction (coding).

That's quite often the case -- now, if the design was _useful_,
then that would not be a problem. Unfortunately analyses or designs
are often rather dramatically decoupled from each other and from the
implementation, for both technical and sociological (for example inane
adherence to the waterfall model) reasons, and so that effort is usually
largely wasted.

*If* analysis and design efforts were conducted in resonance with each
other and with implementation, then spending more effort on those than
on coding would be all fine and actually rather useful, for formulating
solutions in more abstract terms usually makes them easier to maintain
and modify.

tbushell> Contrast this with doing the blueprints for a bridge - the
tbushell> design effort is orders of magnitude cheaper than the
tbushell> construction. (Or so I believe - a civil engineer might
tbushell> correct me on this).

It is usually _cheaper_, but on the other hand it might take _longer_.

Developing and then ``debugging'' a bridge design is a long and
difficult process, which involves a large number of considerations in
different fields, from economics to demographics to aesthetics.

tbushell> Also, the OO design models I've studied don't seem to be very
tbushell> good maps of actual real world systems - there seems to be a
tbushell> big gap between high level architecture and running code. I
tbushell> believe there should be a fairly smooth continuim from high
tbushell> level to low level of detail.

Why?

tbushell> I'm starting to believe that design and code don't make sense
tbushell> as separate entities - the design should _become_ the code -
tbushell> the design documents for an implemented system are used as the
tbushell> foundation of the code, and then regenerated from the code.
tbushell> Major benefits would be that design work would not be
tbushell> discarded because it was too difficult to bring it up to date
tbushell> with reality. Therefore, the design should never get out of
tbushell> synch. This a similar idea to reverse engineering, but not
tbushell> identical.

This seems a bit fuzzy as a description, but reminds me of the
``corroboration'' approach from Dijkstra to program correctness: if one
develops programs using methodical techniques _starting_ from their
``proof'', then the correctness of the program is highly corroborated by
such a process.

tbushell> If anyone has knows of tools that would facilitate this
tbushell> approach, I'd certainly be interested. I've done some very
tbushell> simple prototypes, and hope to work on the idea in future
tbushell> (when I have more time - Hah!).

But OO is in large part about this: the ``high level''
modules/classes/prototypes are supposed to capture the essence of the
design. Pointing some sort of OO program browser to a program source and
removing from the picture the lower levels of abstraction *ought* to
reveal the design. This *ought* to be the case with structured
programming methods in general, and with OO in particular it should be
even more pleasant because of the disciplined modularization of the
program it entails.

Piercarlo Grandi

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

>>> "rmartin" == Robert C Martin <rma...@oma.com> writes:

[ ... ]

harry> In fact IMHO an OO team has no place for anyone who cannot do all
harry> three tasks. [Analysis, Design, and Implementation]

rmartin> Agreed, emphatically.

As much as I agree with these wise words, which clearly arise out of a
solid amount of experience with the ``alternative'', I have to sadly add
here that sociological reasons make the ``alternative'' rather common;
career stratification, for one.

harry> Jim Coplien wrote an excellent pattern called 'Architect also
harry> Implements' which covers very nicely the reasoning behind not
harry> allowing non-implementors to design systems.

rmartin> Software architects who do not implement will be ignored by the
rmartin> people who actually *do* implement. An architect cannot be
rmartin> effective unless he/she really understands the problems that
rmartin> the implementors are facing today, now, this minute.

I know both you and Harry already know this, but let me add for the sake
of completeness and of the record (and Ell :->): and vice versa!

Architecture, as you have so many times argued, is extremely important,
and the implementor who is not guided by sound architectural
principles, by close interaction with analysis and design, is not going
to produce a nice implementation.

Which of course brings us back to the observation above: that
programming, and in particular OO with its great emphasis on structured,
modular abstraction, requires the ability to understand and perform at
all three levels of discourse.

Nick Thurn

unread,
Dec 5, 1996, 3:00:00 AM12/5/96
to

IMO asking "what is wrong with OO?" is a bit like asking "what is wrong with
a hammer?"

My point is that OO is only a tool, and the user of a tool is fundamental
to the results. IMO if anything is wrong with OO it is inflated
expectations. In the end it is people who create software. Good people
create good software in any language/paradigm. Good OO looks deceptively
simple. Attaining simplicity is the hard part. I think of it as the
difference between whistling a tune and writing a tune. Currently there
are too many writers and not enough whistlers :)

Regarding reuse and reusability, there are two levels of reuse: personal and
by strangers. Personal (including your team) reuse is pretty easy
to achieve; reuse by strangers is hard. Reusable is in the eye of the reuser,
not the writer. All a writer can do is *attempt* to create reusable code;
it is for others to decide whether it is reusable or not.

In C++ land the biggest barrier to reuse has been the proliferation of
proprietary container libraries (usually as part of a more high-level
set of functionality). With the standard basically here, this problem
may go away. We still have the problem that most (probably all) mature
libraries are written in legacy C++. When will they port across? When
will compilers be available that handle all the new features? When will
there be a standard ABI?
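
Just to illustrate the point, a minimal sketch of coding against the
standard containers rather than somebody's proprietary List class -
nothing clever, but it's an interface a stranger can reuse without
buying your library:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Takes and returns standard containers, so callers need nothing
// proprietary in order to reuse it.
std::vector<std::string> sorted_copy(const std::vector<std::string>& names)
{
    std::vector<std::string> result(names);
    std::sort(result.begin(), result.end());
    return result;
}

int main()
{
    std::vector<std::string> langs;
    langs.push_back("Smalltalk");
    langs.push_back("C++");
    langs.push_back("Eiffel");

    std::vector<std::string> sorted = sorted_copy(langs);
    for (std::size_t i = 0; i < sorted.size(); ++i)
        std::cout << sorted[i] << '\n';
    return 0;
}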

IMO OO is a great tool; if it is flawed, it is the execution, not the concept.
Oh well, back to the salt mines...

cheers
Nick (my opinions only)

Ell

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Tom Bushell (tbus...@fox.nstn.ns.ca) wrote:
:
: Robert C. Martin wrote:
:
: >Harry Protoolis wrote:
: >
: >> The traditional techniques all suffered from a number of significant
: >> flaws. Perhaps the most damaging one was what I (rather unkindly) think
: >> of as 'The glorification of idiots' phenomenon. What I mean by this is
: >> that projects were typically infested by a group of people who never
: >> wrote any software, but spent most of the budget drawing diagrams that
: >> the implementors never used.

So then we had the elevation of self-centered hackers, eh? Not that all
such plans were good, but the coders should be following some
architectural plan 95% of the time.

: >Much to my dismay, there are some OO methods that are promoting
: >the same scheme. The "analyst" draw nice pretty little diagrams, and
: >even run them through simulators to "prove" that they work. These
: >diagrams are then run through a program that generates code. Programmers
: >who maintain that code generator have to make sure that the "right" code
: >is generated. They have to make the program work.

Are you saying that iterative and incremental diagrams are always wrong or
that the code generator has problems? Or both?

: It is my growing opinion that this is a fundamental problem with all
: "formal" design methods, not just OO design. The effort involved in
: doing the design is as great or greater than doing the construction
: (coding). Contrast this with doing the blueprints for a bridge - the
: design effort is orders of magnitude cheaper than the construction.

This to me only shows that building software and building bridges are 2
different kinds of building activity. It does not impugn the efficacy of
determining sw project analysis and formulating a project architecture.

: (Or so I believe - a civil engineer might correct me on this). Also,
: the OO design models I've studied don't seem to be very good maps of
: actual real world systems - there seems to be a big gap between high
: level architecture and running code. I believe there should be a
: fairly smooth continuim from high level to low level of detail.

This is precisely what the 3 amigos and other OO gurus advocate in
their works.

: I'm starting to believe that design and code don't make sense as
: separate entities - the design should _become_ the code - the design
: documents for an implemented system are used as the foundation of the
: code, and then regenerated from the code. Major benefits would be
: that design work would not be discarded because it was too difficult
: to bring it up to date with reality. Therefore, the design should
: never get out of synch. This a similar idea to reverse engineering,
: but not identical.

Ditto my last remark.

Cheers,

Elliott

Ell

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Richie Bielak (ric...@calfp.com) wrote:
: Matthew Gream wrote:
:
: [...]
:
: > How flexible is the software? This has a lot to do with architecture:

: > "if you don't get the architecture right up front, you may as well pack
: > up and go home" [quote made by a fellow engineer].

: These kinds of statements always bother me. How are you supposed to
: know that the architecture (or design for that matter) is right?
:
: The only way I see is to implement it and see how it works. That's
: why the iterative software development makes sense, you get to try
: out your ideas in practice and adjust them as needed.

The point as far as I'm concerned is that an architecture should guide
_all_ coding. That is even if the initial architecture is later modified,
or later scrapped.

Elliott

Tom Bushell

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

On 5 Dec 1996 03:56:02 GMT, e...@access2.digex.net (Ell) wrote:

>Even if "doing" good OO analysis (drawing "nice pretty little diagram" and
>*much* more) cost more time and dollars than OO coding, it pays off
>because doing good OO analysis is generally decisive to the most rapid and
>Use Case effective development of the project.

I agree that formal high level design _should_ pay off, but there is
ample anecdotal evidence (as provided by other contributors to this
thread) that it doesn't always. Some might argue that this is because the
design process wasn't "good", but I wonder if this is just a symptom
of a deeper problem with formal design as it is currently practiced.
We seem to be modelling the civil engineers - draw the blueprints,
then build the bridge, and correct the blueprints "as built". But
software is bits, not atoms, and there may be better approaches.


>And if you think about it, the only real way for what you call "design" to
>"become" the code is if a perspective larger than being mainly focused on
>physical coding *leads* physical coding.

Agreed. I picture the design activity as creating some form of
outlines and diagrams, which would be successively refined down to the
level of running code. But you could "back up" (zoom out?) to the
higher levels of abstraction at any point in the development (or
maintenance) process.

>This is precisely the approach of Booch/Coad/Jacobson/Rumbaugh from what I
>know about their methods. Physical design and coding is based on/rooted
>in analysis concepts. Physical design and coding are simply additive to
>the analysis models.

I speak only from knowledge gleaned from magazine articles, but my
understanding is that the tools are lacking to provide the outline
style views I'm looking for, with the possible exception of
Together/C++.

IMO, most of these well known methods focus on the wrong things.
There seems to be far too much emphasis on inheritance relationships,
which is not extremely useful in my experience. I'd be much more
interested in tools that allowed me to work with dataflow and side
effects on persistent data - at various levels of abstraction.

Jeff Miller

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Bill Gooch wrote:
>
> Ahmed wrote:
> >
> > Actually immediate reuse can be achieved to a certain extent with
> > the traditional structural methods if they adopted a good design
>
> A key phrase here is "to a certain extent." OO allows
> more effective reuse (less redundancy, less copy-and-edit)
> than alternatives.

This reminds me of a thread we had on the comp.lang.c++ ng a couple
of years ago.

I maintained then, and continue to maintain, that the extent to
which OO tools incrementally promote reuse (C++ as compared to C,
to draw an obvious example) is not in and of itself sufficient to
have any meaningful impact in any large organization.

Effective reuse is substantially a management issue, not a technical
one. OO helps, but organizational and process changes are more
important.

Jeff Miller
Senior Server Architect
CSG Systems, Inc.
jeff_...@csgsys.com
jmi...@probe.net

Thomas Gagne

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Computer technology is treated little differently by the media than
politics, medicine, and/or consumer safety issues. Basically, most
pundits make their living exploiting the Chicken Little model. Those
with soft, impressionable minds believe what they read simply because
they read it.

Truth is, there's nothing wrong with OOT. There's nothing wrong with
client-server. But the reporting of the shortcomings and demises is
premature and irresponsible. There will always be high-profile
projects using the technology du jour that fail for reasons other than
the shortcomings of the technology.

Whether your technology is assembler, C, RDBMS, Smalltalk, distributed
processing, or whatever, it can be sabotaged by:
mismanagement
inexperienced developers (in the technology)
poorly managed customer expectations
poorly trained users
poor documentation
did I mention mismanagement?
None of these incriminates technology. All of these afflictions exist
independent of the technology employed. Admittedly, I have been a part
of projects that have (euphemistically) missed their objectives more
because of the misuse of technology than the fault of it.

Take for instance the many hurled affronts to RDBMS as being slow. I
still regularly read and hear of supposedly credible experts
recommending denormalization as a way of overcoming performance
problems. I've discovered the problem isn't with your engine, it's how
you're using it. Have you tried using multiple connections executing
parallel queries to eliminate join processing? If that sounds too
difficult, maybe you should get a systems programmer to develop a
three-tier system for you so the more complex programming can be hidden
in a middle tier (possibly where it belongs?) rather than in your
application.

People who have been a part of varied programming teams have often
learned that good programmers program well in any language. Bad
programmers program poorly in any language.

It's time to stop blaming the technology and get with the program. As
the adage says, "Those who say it can't be done should get out of the
way of those doing it."


Carl Weidling

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

In article <32a5ceba...@news.nstn.ca>,
Tom Bushell <tbus...@fox.nstn.ns.ca> wrote:
>On Wed, 04 Dec 1996 08:45:22 -0600, rma...@oma.com (Robert C. Martin)
>wrote:

>
>>ha...@matilda.alt.net.au (Harry Protoolis) wrote:
>>
>>> The traditional techniques all suffered from a number of significant
>>> flaws. Perhaps the most damaging one was what I (rather unkindly) think
>>> of as 'The glorification of idiots' phenomenon. What I mean by this is
>>> that projects were typically infested by a group of people who never
>>> wrote any software, but spent most of the budget drawing diagrams that
>>> the implementors never used.
>>
...<stuff deleted for brevity -cpw>

>
>It is my growing opinion that this is a fundamental problem with all
>"formal" design methods, not just OO design. The effort involved in
>doing the design is as great or greater than doing the construction
>(coding). Contrast this with doing the blueprints for a bridge - the
>design effort is orders of magnitude cheaper than the construction.
...<more stuff deleted>

>I'm starting to believe that design and code don't make sense as
>separate entities - the design should _become_ the code - the design
>documents for an implemented system are used as the foundation of the
...<rest of previous posting deleted>
I remember seeing a documentary about Gothic Cathedrals as
examples of engineering. The commentary compared the way models are
constructed first nowadays to test a design, and they showed models of
cathedrals going through stress tests, but for those medieval masons,
the building was also the engineering model. The flying buttresses for
instance, were added when the masons saw how wind was blowing down the walls.
On the other hand, wasn't there a famous example back in the 70s
when 'top-level design' was first being expounded, where some big project
for a newspaper or something was designed first and then coded and it worked.
I remember this being cited a lot when I first started programming, can
anyone recall details or hard facts about that?
--
Cleave yourself to logodedaly and you cleave yourself from clarity.

Harry Protoolis

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

On Thu, 05 Dec 1996 03:06:57 GMT, Tom Bushell <tbus...@fox.nstn.ns.ca> wrote:
>On Wed, 04 Dec 1996 08:45:22 -0600, rma...@oma.com (Robert C. Martin)
>wrote:
>
>>ha...@matilda.alt.net.au (Harry Protoolis) wrote:
>>
>>> The traditional techniques all suffered from a number of significant
>>> flaws. Perhaps the most damaging one was what I (rather unkindly) think
>>> of as 'The glorification of idiots' phenomenon. What I mean by this is
>>> that projects were typically infested by a group of people who never
>>> wrote any software, but spent most of the budget drawing diagrams that
>>> the implementors never used.
>>
>>Much to my dismay, there are some OO methods that are promoting
>>the same scheme. The "analyst" draw nice pretty little diagrams, and
>>even run them through simulators to "prove" that they work. These
>>diagrams are then run through a program that generates code. Programmers
>>who maintain that code generator have to make sure that the "right" code
>>is generated. They have to make the program work.
>
>It is my growing opinion that this is a fundamental problem with all
>"formal" design methods, not just OO design. The effort involved in
>doing the design is as great or greater than doing the construction
>(coding). Contrast this with doing the blueprints for a bridge - the
>design effort is orders of magnitude cheaper than the construction.
>(Or so I believe - a civil engineer might correct me on this). Also,
>the OO design models I've studied don't seem to be very good maps of
>actual real world systems - there seems to be a big gap between high
>level architecture and running code. I believe there should be a
>fairly smooth continuim from high level to low level of detail.

IMHO, the trick with OO design is to do it *informally* most of the
time. What I mean by this is that you sketch out high level architecture
and analysis results as high level class diagrams and then give them to
the 'implementors' to design and code.

The implementation team then does design as a series of informal
gatherings around a white board drawing state diagrams, detailed class
diagrams, object diagrams etc. You then photocopy and file the
whiteboard drawings and let the same group of people code from the
hand drawn design.

I then quite often reverse-engineer the resulting code and call *that*
the formal system design.

I do some formal tracking of this design process, but the golden rule is
that nothing should get in the way of the creative process. The failure
of *all* attempts at formal design is this mistaken belief that anything
short of coding can replace coding ...

>I'm starting to believe that design and code don't make sense as
>separate entities - the design should _become_ the code - the design
>documents for an implemented system are used as the foundation of the

>code, and then regenerated from the code. Major benefits would be
>that design work would not be discarded because it was too difficult
>to bring it up to date with reality. Therefore, the design should
>never get out of synch. This a similar idea to reverse engineering,
>but not identical.

My point is that the *design* and the *code* come into existence
together. To talk about the design becoming the code implies that it
exists before the code in some sense.

>If anyone has knows of tools that would facilitate this approach, I'd
>certainly be interested. I've done some very simple prototypes, and
>hope to work on the idea in future (when I have more time - Hah!).

I think that a fascinating tool would be a language sensitive drawing
tool in which you could switch your view between diagrams and code and
write 'code' in either view.

H
-
Harry Protoolis alt.computer pty ltd
ha...@alt.net.au software development consultants


Harry Protoolis

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

On 6 Dec 1996 02:12:33 GMT, Ell <e...@access5.digex.net> wrote:
>Tom Bushell (tbus...@fox.nstn.ns.ca) wrote:
>:
>: Robert C. Martin wrote:
>:
>: >Harry Protoolis wrote:
>: >
>: >> The traditional techniques all suffered from a number of significant
>: >> flaws. Perhaps the most damaging one was what I (rather unkindly) think
>: >> of as 'The glorification of idiots' phenomenon. What I mean by this is
>: >> that projects were typically infested by a group of people who never
>: >> wrote any software, but spent most of the budget drawing diagrams that
>: >> the implementors never used.
>
>So then we had the elevation of self-centered hackers, eh? Not that all
>such plans were good, but the coders should be following some
>architectural plan 95% of the time.

I suppose I started the name-calling, so I can't complain can I :-)

In any event, that is not what I am advocating at all. What I am
suggesting is that OO techniques *can* be sold to implementors, because
good OO design *does* map to meaningful implementations (unlike, IMHO,
any form of traditional structured design).

As a result, IME, you can convince your implementors to do design
themselves. This has the multiple benefits of getting better, more
meaningful designs done, *and* not getting them ignored.

If you then involve *the same group of people* in the up front problem
analysis you get a win-win-win scenario, as they are now basing
implementable designs on analysis results in which they have a stake.

This is the beauty of 'Architect-also-implements': the chief architect
can lead all three components of the process with some credibility, and
can handle the most awful of all questions asked of a 'pure' Analyst,
namely 'How the hell do I implement this?', and avoid the most soul
destroying (and project killing) of all answers ...

'I don't care, that's just an implementation problem'.

H

p.s. Elliot, your newsreader is (still) badly broken, all your postings are
appearing out of sync in their respective threads. Are you using
'Followup' to respond to postings ?

Harry Protoolis

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Despite being a 'self centered hacker' (I like that title :-)) I
actually agree with you Elliot. You should at least sketch your
architecture out after your analysis is complete, (where analysis ==
primary use cases), and design to it.

The possibility of being wildly wrong in your first cut at architecture
is no excuse for not trying; after all, if you are right then you win, and
if you are wrong then at least you have something to iterate on.

H

Tom Bushell

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

On 05 Dec 1996 22:30:40 +0000, p...@aber.ac.uk (Piercarlo Grandi)
wrote:

>*If* analisys and design efforts were conducted in resonance with each
>other and implementation, then spending more effort on those than coding
>would be all fine and actually rather useful,

I agree that the effort is useful. But my gut feeling is that with
better (and apparently undiscovered, as yet) processes and tools, the
high level design activity should be about 10% of the total project,
not around 50%.

>tbushell> Contrast this with doing the blueprints for a bridge - the
>tbushell> design effort is orders of magnitude cheaper than the
>tbushell> construction. (Or so I believe - a civil engineer might
>tbushell> correct me on this).
>
>It is usually _cheaper_, but on the other hand it might take _longer_.

I assume this is because the design is the work of a much smaller
team, whose only physical output is computer models or paper. This is
my point - other engineering disciplines appear to routinely put much
less total effort into design, with much greater success. I guess
this is just the positive result of greater maturity as a discipline.

>tbushell> Also, the OO design models I've studied don't seem to be very
>tbushell> good maps of actual real world systems - there seems to be a
>tbushell> big gap between high level architecture and running code. I
>tbushell> believe there should be a fairly smooth continuim from high
>tbushell> level to low level of detail.
>
>Why?

Why not? ;-) (Don't know what you're asking here...)

>But OO is in large part about this: the ``high level''
>modules/classes/prototypes are supposed to capture the essence of the
>design. Pointing some sort of OO program browser to a program source and
>removing from the picture the lower levels of abstraction *ought* to
>reveal the design. This *ought* to be the case with structured
>programming methods in general, and with OO in particular it should be
>even more pleasant because of the disciplined modularization of the
>program it entails.

Absolutely! But why doesn't it work out that way?

Harry Protoolis

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

On Wed, 04 Dec 1996 16:35:54 +0000, Ahmed <ACQ...@shef.ac.uk> wrote:
>Harry Protoolis wrote:
>>
>> On Tue, 03 Dec 1996 17:38:37 +0000, Ahmed <ACQ...@shef.ac.uk> wrote:
>> However, the real question should be 'has OO made a significant positive
>> difference', and in my experience the answer is a resounding 'yes!'.
>>
>
>
>Dear Harry,
>I agree with you that OO has many advantages, but I can not feel that significant improvement
>as you said,
>
>The important question is how to measure the success of OO.
>Can you please tell me on what criteria you measured this significant difference?
>Is it
>( code reusability / software development time / software performance / software reliability /
>software cost / software portability / ...etc .. ) these issues that count for any organization?
>
>Actually I am looking for any references that compare "with figures and statistics"
>between different applications developed using OO and the traditional methods.

This, I think, is the nub and crux of your problem. The gathering of real
empirical data on software development is difficult or impossible. Real
software development companies do not have time to prepare these results
for publication, and usually consider them too commercially sensitive.

>All that I have found are examples that show OO is workable; for me this
>is not evidence of the significant difference"

Sorry, if it's hard evidence you want you probably need to wait another
ten years or so at least. However, the anecdotal evidence is that OO is
at least as good at getting the job done as conventional techniques, and
(very) occasionally spectacularly better.

>Another thing, Since you are familiar with OO,
>Could you please tell me what is the best environment to develop an OO application,

:-), g++, vim, make, purify and ddd on a Sun Ultra Creator 3D with two
heads. (sorry guys, sparcworks CC is nice, but debugger *still* bites)

I find that Tools.h++ helps a lot, and when I can get it the STL.

>( in my case most of our applications are database systems )

Oh, and SYBASE with DbTools.h++ from Roguewave.

Seriously that is a very broad question, and depends a great deal on
your application domain. This sort of advice is worth what you pay for
it.

Cheers,
H

p.s. your lines are too long ...
_

Steve Heller

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Daniel Drasin <dra...@arscorp.com> wrote:

>The problems I've seen with OO projects arise not from the use of OO,
>but from the misuse of OO. Programmers trying to use non-OO methods,
>incorrectly applying OO concepts, etc. This is a result of a lack of
>OO teaching at educational institutions. Even schools that offer
>1 or 2 OO language courses usually fail to educate; they use C++
>and only really teach the "C" part. There are very few
>universities that make an effort to inculcate students with an
>understanding of OO techniques and methods. So it's no wonder
>when these graduates try to apply them in the "real world," they
>get all fouled up.

I agree. Moreover, instructors who DO attempt to teach "real" C++
(i.e., OO) programming run the risk of upsetting students who think
they already know how to program and other instructors who are still
not completely conversant with OO notions.


Steve Heller, author and software engineer
http://ourworld.compuserve.com/homepages/steve_heller


Nick Leaton

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

:

> : > How flexible is the software? This has a lot to do with architecture:
> : > "if you don't get the architecture right up front, you may as well pack
> : > up and go home" [quote made by a fellow engineer].
>
> : These kinds of statements always bother me. How are you supposed to
> : know that the architecture (or design for that matter) is right?
> :
> : The only way I see is to implement it and see how it works. That's
> : why the iterative software development makes sense, you get to try
> : out your ideas in practice and adjust them as needed.
>
> The point as far as I'm concerned is that an architecture should guide
> _all_ coding. That is even if the initial architecture is later modified,
> or later scrapped.


But if you take a standard example, employee: can you code up an
employee class without having an architecture? Since an employee class
could be used in lots of different systems, all with different
architectures, you don't need to have an initial architecture to start
from. This is true of almost all components.

I don't see why there should be a distinction between top down and
bottom up. I use both in practice. Start off coding the obvious objects.
This gives you a feel for what you are doing. Then put them together in
some framework.
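
For what it's worth, a bare-bones sketch of the sort of employee class
I mean (the attributes are invented; the point is that it presumes
nothing about the surrounding architecture):

#include <string>

// Deliberately knows nothing about payroll runs, persistence, or GUIs -
// those belong to whatever architecture it eventually lives in.
class Employee {
public:
    Employee(const std::string& name, double salary)
        : name_(name), salary_(salary) {}

    const std::string& name() const { return name_; }
    double salary() const { return salary_; }
    void give_raise(double amount) { salary_ += amount; }

private:
    std::string name_;
    double salary_;
};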
--

Nick

Ralph Cook

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Harry Protoolis wrote:
>
> On Wed, 04 Dec 1996 16:35:54 +0000, Ahmed <ACQ...@shef.ac.uk> wrote:
> >Harry Protoolis wrote:
> >All that I have found are examples that show OO is workable; for me this
> >is not evidence of the significant difference"
>
> Sorry, if it's hard evidence you want you probably need to wait another
> ten years or so at least. However, the anecdotal evidence is that OO is
> at least as good at getting the job done as conventional techniques, and
> (very) occasionally spectactularly better.

And occasionally spectacularly worse. And to say that
individual projects like this have succeeded or failed does NOT
"prove", or even "show", that the language environment used is
"good" or "bad".

rc
--
When I DO speak for a company, I list my title and use "We".

David Bradley

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

The problem is that reuse through OOP takes effort. People seem
unwilling to expend the effort to get the benefit of OOP. They put an
OOP in place and don't change their methods and then wonder why they
aren't able to reuse code. You never get something for nothing. If
you're unwilling or unable to put forth the effort you'll never see
the benefit.

In order for something to be reused it has to be designed properly.
OOD is where most people fail. This takes effort. If you don't
expend this effort upfront you'll not get anything out.

Look at OOP as a lever. The work you put in will be multiplied out
the other end. If you don't put in the initial work on the front end
then you'll never get the benefit on the back end.

--------------------------------------------
David Bradley dav...@datalytics.com
Software Engineer
Datalytics, Inc. http://www.datalytics.com

Bill Gooch

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Jeff Miller wrote:
> ....
> Effective reuse is substantially a management issue, not a technical
> one. OO helps, but organizational and process changes are more
> important.

I think you may have missed the emphasis I was putting on
*immediate* reuse. This means reuse of software modules
(methods, interfaces, classes, etc.) within a single
application, and sometimes across similar applications
that are being developed concurrently. IME, this flavor
of reuse is primarily a technical (and technical project
management) issue, and not an organizational one.
"Process" at some level is always an issue, but then
that's not saying very much.

OTOH, I agree that large-scale (longer term, and/or
broader scope, e.g. framework-level) reuse requires
organizational compliance. But still, the technical
issues can't take too much of a back seat, or nothing
good will come of it. You may accomplish reuse with
organizational and process changes in the absence of
a strong technical understanding, but the stuff you'll
be reusing won't be worth reusing.

H Brett Bolen

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

I don't believe I'm getting involved with this. Anyway, here are my
thoughts:

What are the 'Big Three' fundamental concepts of OO?

They are

Inheritance
Polymorphism
Encapsulation


I think that Inheritance struggles against Encapsulation. Instead
of a function being in a single file, it is spread out over
two, three or sometimes even 10 files (depending upon the
'inheritance lattice'). Sometimes I find myself questioning which
of the 11 virtual functions I need to fix.

I think better tools could help.

I think Inheritance is overused. It can even be harmful to reuse
(but I won't get into THAT). When it's overused, it just makes
things complicated.
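
A trivial sketch of the kind of thing I mean (class names invented) -
once you have a few levels, the behaviour you're chasing lives in
several files:

#include <iostream>

// Three files' worth of hierarchy squashed into one for illustration.
class Widget {
public:
    virtual ~Widget() {}
    virtual void draw() { std::cout << "Widget::draw\n"; }
};

class Button : public Widget {
public:
    virtual void draw() { std::cout << "Button::draw\n"; }
};

class ImageButton : public Button {
    // no draw() override here - the behaviour you are debugging
    // actually lives up in Button, in another file in real life
};

int main()
{
    ImageButton b;
    Widget* w = &b;
    w->draw();   // prints "Button::draw" - which file do you open?
    return 0;
}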

'What's wrong with C++ and OOP?' is a way different question.
--
b\253 | Take Chances, Make Mistakes
bre...@cpcug.org | Get Messy
brett bolen | - Ms Frizzle - MSB
Walrus Consulting | http://www.cpcug.org/user/brettb

Ahmed

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to a.alk...@dcs.shef.ac.uk

Bill Gooch wrote:
>
> Ahmed wrote:
> >
> > Does this mean,If two organizations developed almost typical applications
> > does not mean that the objects developed can be reusable between them..
> > Is not this a deficiency in OO.
>
> As compared to what? Non-OO software? I think not.
>

Compared to the Object concept and its capability ..!!

> Two different automobile designs rarely share any
> compatible parts (except those which are industry-
> standardized, like oil filters), unless the designers
> worked together with that goal in mind.
>

I think this is a good example for comparison,

Does every car company develop the car from A to Z? I doubt it..!
There are special companies dedicated only to developing specific spare parts for cars.

It is true that the spare parts are not compatible (in general) between different car models.
However, almost all cars share the same level of abstraction.

When you say "radiator" to a mechanical engineer he will immediately understand its functionality,
no matter whether it is a Mercedes or a Honda.
When you say to a mechanic "piston", "shaft", "gear-box", "clutch", "handbrake", etc. (you name it),
then an immediate image is drawn in his mind and he will grasp a general perception.

Even if you go to lower levels, each part has a name and a main function.

Without this common abstraction between cars, the job of any mechanic would be almost
impossible. Otherwise every car model would require a dedicated mechanical engineer.


> > Every programmer is tackling the same problem using his own perception
> > of the problem..his own abstraction..
>

> Yes, and the alternative is?...
>

In my opinion, to get the advantage of OO capability, there must be an
agreement on the abstraction level of every domain. This should turn into a standard abstraction
accessible to any software developer. There should be an institute or company to take this
responsibility. Otherwise people will always reinvent the wheel by inventing their trivial classes
locally, which is a waste of a valuable resource .. ( Programmers )


> > The concept behind OO is that it deals with pieces of software as
> > tangible objects exactly as the real world works..
>

> Not at all. "How the real world works" is by no means
> obvious or well understood ("real world" in itself is
> an exceedingly vague term), and you'd need to provide
> some definitions of these things, as well as evidence
> to support the above assertion.
>

I feel that this is a more philosophical answer than a practical one.
Yes, you need to provide some definitions for tricky words that might
give different semantics; however, trivial words in the proper context
are self-explanatory ..!!

> > however in real world
> > every object has a clear behaviour and perception by every body,
>
> Not in the slightest.


>
> > while in the OO software each object has a behaviour according to
> > the perception of his designer..!!
>
> Sometimes. The designer probably hopes it does.
>

> > The problem is that many organizations avoid moving toward OO because
> > the transfer cost to OO ( training programmers / organization change in
> > standards / new tools / new analysis and design methods / legacy
> > system / etc. ) is much higher than the benefit of "immediate reuse"
>
> OK - why is this a problem?

This means that OOP has not yet proved or shown its great advantages in many domains. So
it needs more effort before being accepted widely. I believe that OO has the power to do so,
but (probably) the wrong usage of it is preventing its remarkable success in certain
areas.


>
> > Another point regarding inheritance, we know that Visual Basic does not
> > have the capability of inheritance, however you can build a system
> > much faster compared to using Visual C++ with much less code.
>
> Depends what system, doesn't it? VB isn't ideal for
> all computer applications; C++ is probably a better
> choice for at least some of them.
>

I agree with you that C++ is much more powerful than VB in certain areas ..
But what prevents an OO language from exceeding a non-OO one in all areas ..?


Regards,
Ahmed

Myles Williams

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

In article <588g4v$j...@samba.rahul.net> Carl Weidling <c...@rahul.net> writes:
On the other hand, wasn't there a famous example back in the 70s
when 'top-level design' was first being expounded, where some big project
for a newspaper or something was designed first and then coded and it worked.
I remember this being cited a lot when I first started programming, can
anyone recall details or hard facts about that?

That would be the New York Times database, created by IBM circa 1970.
It was the first full-scale application of structured programming, and
was completed early and under budget with approximately 0.25
defects/kLOC.

--
Myles Williams "When you see me again, it won't be me."
http://funnelweb.utcc.utk.edu/%7Ewilliams/freeos | Guide to free
| operating system kernels

Ranjan Bagchi

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Myles Williams wrote:
>
> In article <588g4v$j...@samba.rahul.net> Carl Weidling <c...@rahul.net> writes:
> On the other hand, wasn't there a famous example back in the 70s
> when 'top-level design' was first being expounded, where some big project
> for a newspaper or something was designed first and then coded and it worked.
> I remember this being cited a lot when I first started programming, can
> anyone recall details or hard facts about that?
>
> That would be the New York Times database, created by IBM circa 1970.
> It was the first full-scale application of structured programming, and
> was completed early and under budget with approximately 0.25
> defects/kLOC.
>

Wow.. that's impressive.

It also indicates to me that flagship projects in any methodology
seem to be incredibly successful. Perhaps it's because world-class
engineers are working on them and they're just successful regardless of
technology or tools or anything. The suggestion, though, may be that
the success of projects done by mere-mortal engineers using a particular
methodology would provide better data points.

How this applies to the success of O-O is interesting because I think
most people agree that misapplied O-O results in failed projects
which sully O-O's reputation. Is it easier for mere-mortals to
understand O-O?

Should mere-mortals be in the software development business at all is
another question, which is vaguely frightening.

-rj


Robert C. Martin

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

In article <587veh$n...@news3.digex.net>, e...@access5.digex.net (Ell) wrote:

> : Robert C. Martin wrote:
>
> : >Much to my dismay, there are some OO methods that are promoting

> : >the same scheme. The "analyst" draw nice pretty little diagrams, and
> : >even run them through simulators to "prove" that they work. These
> : >diagrams are then run through a program that generates code. Programmers
> : >who maintain that code generator have to make sure that the "right" code
> : >is generated. They have to make the program work.
>

> Are you saying that iterative and incremenatl diagrams are always wrong or
> that the code generator has problems? Or both?

I am saying that the creation of an elite "architecture team" composed
of members who:
1. Dictate their decisions to the programmers.
2. Do no programming themselves.
is a prelude to a disaster.

Architects must first be programmers. Architects must continue to write
some code. Architects must sell, or at least negotiate, their architectures
with the developers.

--
Robert C. Martin | Design Consulting | Training courses offered:
Object Mentor | rma...@oma.com | Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004 | C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Prashant Gupta

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

H Brett Bolen wrote:
>
> I think that Inheritance struggles against Encapsulation. Instead
> of a function being in a single file, it is spread out over
> two, three or sometimes even 10 files ( depending upon the
> 'inheritance lattice'). Sometimes i find myself questioning which
> of the 11 virtual functions do i need to fix.
>
On the contrary, I think careful use of inheritance promotes
encapsulation. In any case, I don't think encapsulation means that
all the code will be in a single "file". After all, using separate
text files for different subclasses is just an implementation detail
specific to the language (eg C++). Encapsulation simply means that
logically related concepts, namely the data and the operations that
can be performed on the data, "live" together within a logical entity.
How that logical entity is stored on a physical device is an entirely
different matter. An inheritance hierarchy is certainly a well defined
logical entity within the OO paradigm where all of the polymorphic
methods can live.

Let us say we have an abstract superclass (base class) called
"Automobile" which provides a method called "Automobile>>#driveTo:"
[ Automobile::driveTo(const Destination *) ].
The actual implementation of the method needs to be deferred to the
concrete subclasses (derived classes) because
4speed/5speed/Nspeed/automatic transmission vehicles would presumably
work differently. Now even if this "driveTo" method is implemented
differently in 25 concrete subclasses, I don't see how that violates
encapsulation. More importantly, how would it promote encapsulation
if we were to not use inheritance and instead construct all of the
concrete classes from scratch? Wouldn't that require duplication of
code that is common to all Automobiles? Wouldn't that be a violation
of encapsulation in a sense?
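
In rough C++ terms, a sketch only (the names follow the example above;
the transmission details are of course made up):

class Destination;   // forward declaration is enough for this sketch

class Automobile {
public:
    virtual ~Automobile() {}
    // deferred to the concrete subclasses
    virtual void driveTo(const Destination* where) = 0;
protected:
    void startEngine() { /* code common to all Automobiles, written once */ }
};

class ManualTransmissionCar : public Automobile {
public:
    virtual void driveTo(const Destination* /* where */) {
        startEngine();          // shared behaviour inherited, not copied
        // ... shift through the gears ...
    }
};

class AutomaticTransmissionCar : public Automobile {
public:
    virtual void driveTo(const Destination* /* where */) {
        startEngine();
        // ... let the transmission do the shifting ...
    }
};

The common code lives in one place, and each subclass encapsulates only
what genuinely differs.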


>
> 'Whats wrong with C++ and OOP?' is a way different question.

No argument there. :-)

Prashant Gupta.

> --
> b\253 | Take Chances, Make Mistakes
> bre...@cpcug.org | Get Messy
> brett bolen | - Ms Frizzle - MSB
> Walrus Consulting | http://www.cpcug.org/user/brettb

--

The opinions expressed here are mine and do not necessarily reflect
the opinions of my employer.
----------------------------------------------------------------------
Prashant Gupta mailTo:p...@bbt.com (919) 405 4884
----------------------------------------------------------------------

David Bradley

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

>I assume this is because the design is the work of a much smaller
>team, whose only physical output is computer models or paper. This is
>my point - other engineering disciplines appear to routinely put much
>less total effort into design, with much greater success. I guess
>this is just the positive result of greater maturity as a discipline.

or the result of a more complex field.

Roger Vossler

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Tom Bushell wrote:
> It is my growing opinion that this is a fundamental problem with all
> "formal" design methods, not just OO design. The effort involved in
> doing the design is as great or greater than doing the construction

<snip>

> I'm starting to believe that design and code don't make sense as
> separate entities - the design should _become_ the code - the design
> documents for an implemented system are used as the foundation of the
> code, and then regenerated from the code. Major benefits would be

<snip>

> If anyone has knows of tools that would facilitate this approach, I'd
> certainly be interested. I've done some very simple prototypes, and
> hope to work on the idea in future (when I have more time - Hah!).

IMHO, this is more of a problem with large CASE tools or design systems
than it is with design per se. As I understand it from the OO wizards,
start with a small number of use cases in order to understand the
problem, and if you go OO subsequently, then do some CRC card modeling.

Thus, all you need for a start is a stack of 3x5 index cards, pencil,
paper, and a couple of good books. Peter Coad has a lot of good things
to say about this. Then, when you understand what you are doing, commit
to the CASE swamp of your choice.

The problem is that people first buy a killer tool chest and spend
large numbers of hours understanding how to use the beast and even
more hours stuffing a database, only to discover that they have
created a big mess.

Cheers, Roger Vossler (vos...@csn.net)

Mukesh Prasad

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Tom Bushell wrote:
[snip]

> I'm starting to believe that design and code don't make sense as
> separate entities - the design should _become_ the code - the design
> documents for an implemented system are used as the foundation of the
> code, and then regenerated from the code. Major benefits would be
> that design work would not be discarded because it was too difficult
> to bring it up to date with reality. Therefore, the design should
> never get out of synch. This a similar idea to reverse engineering,
> but not identical.
[...]

I have seen the term "iterative prototyping" used to formally
describe an approach like this.

Iterative prototyping means you start from a
prototype of the system, written from a very sketchy
design (the design may even be just in peoples' heads.)
You use your prototype to write an improved
design, which you then use to implement improvements
to the prototype, which you then use to improve
your design.... repeat until your erstwhile
"prototype" becomes the "system" with a very
closely matching design spec, and everybody is happy.

In practice, I have seen a shortened version,
the "backward spec", i.e. specifications done
from implementations (with modifications as required)
work very well in certain cases. Much better than the
strict "implement exactly from spec" approach.

I believe these less top-down approaches work better
because in a lot of cases, at specification time
the product is very vaguely understood. Moreover,
many implementation problems are not anticipated well.
An actual, physical implementation can sharpen everybody's
hazy understanding to the point where actually good design
decisions can be made. Thus doing the spec from an
initial implementation, and fixing the implementation
to match the final spec, can yield much
better results overall.

Of course, there is no reason why your design
couldn't be OO. The problems you describe,
I think, are not problems of OO, but rather
the problems of trying to do a detailed
design without sufficient understanding
of the product to be designed and its problems.

/Mukesh

Bill Gooch

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Ahmed wrote:
>
>....
> It is true that the spare parts are not compatible ( in general ) between different car models,
> However almost all cars share the same level of abstraction,
>
> when you say a "radiator" to a mechanical engineer he will immediately understand its functionality,
> no matter if it is a Mercedes or Honda car.
> When you say to a mechanic "piston" "shaft" "gear-box" "clutch" "handbrake" ...etc...

In software, any of the above would be a "pattern."

> Even if you go to lower levels, each part has a name and a main function,
> Without this common abstraction between cars, the job of any mechanic would be almost
> impossible. Otherwise every car model will require a dedicated mechanical engineer.

The analogy breaks down because all cars respond to
essentially the same set of general requirements, with
variations occurring at a more detailed level. Software
requirements vary widely at all levels, and thus the
*application* and *composition* of common patterns in
different pieces of code also vary accordingly. Cars
(and trucks) are all similar in many important respects,
in addition to the fact that the patterns involved in
their designs are pretty well and widely understood.
Your everyday mechanic will usually know most of the
patterns by heart, whereas a typical software engineer
is often learning new patterns with each new project.

> Yes you need to provide some definitions for tricky words that might
> give different semantics, however trivial words in the proper context
> are self-explanatory ..!!

IMO words are never "self explanatory." It's in the
nature of perception that each individual has his or
her own distinct interpretation. But there is a wide
range of variation in the ambiguity of different words,
and "real" and "world" are both separately, and even
more so when used together, highly ambiguous. We tend
to ignore or even deny the ambiguity of words like these
that we use very frequently, but trying to define them
in very specific terms often illuminates the issue.

Todd Hoff

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Daniel Drasin wrote:
>

> The problems I've seen with OO projects arise not from the use of OO,
> but from the misuse of OO. Programmers trying to use non-OO methods,
> incorrectly applying OO concepts, etc. This is a result of a lack of
> OO teaching at educational institutions. Even schools that offer
> 1 or 2 OO language courses usually fail to educate; they use C++
> and only really teach the "C" part. There are very few
> universities that make an effort to inculcate students with an
> understanding of OO techniques and methods. So it's no wonder
> when these graduates try to apply them in the "real world," they
> get all fouled up.

If I invented a hammer and 90% of people couldn't use
it correctly, would we blame the hammer or the people?
It seems those who've "got" OO blame the people. Maybe we
should blame the hammer. Maybe OO just won't work in
the mass market of building applications. Not that it
can't, but that it doesn't work often enough to make it
universally appropriate.


-------------------------------------------------------------
t...@possibility.com | The loyalty of small men can be
http://www.possibility.com | bought cheaply, for greed has no
| pride. - Michael Kube-McDowell

Robert C. Martin

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

> On 6 Dec 1996 03:26:57 GMT, Ell <e...@access4.digex.net> wrote:
> >Richie Bielak (ric...@calfp.com) wrote:
> >

> >The point as far as I'm concerned is that an architecture should guide
> >_all_ coding. That is even if the initial architecture is later modified,
> >or later scrapped.
>

> Despite being a 'self centered hacker' (I like that title :-)) I
> actually agree with you Elliot. You should at least sketch your
> architecture out after your analysis is complete, (where analysis ==
> primary use cases), and design to it.

Actually, I prefer to do much more than just "sketch it out", the architecture
should be very well defined, and very detailed. However, I also prefer to
begin producing the architecture well before all the analysis is complete.
And I prefer producing code that is conformant to the architecture - and the
analysis - long before the architecture is complete.

In other words, I like to do them all concurrently.

This doesn't mean that I *ever* code something that is not designed.
It also doesn't mean that I design something that has not been analyzed.
It's just that I don't wait for *all* the analysis to be completed before
I begin on designing the architecture. And I don't wait for the complete
architecture before I begin on the code.

Myles Williams

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

In article <32A863...@pobox.com> Ranjan Bagchi <ranjan...@pobox.com> writes:
> That would be the New York Times database, created by IBM circa 1970.
> It was the first full-scale application of structured programming, and
> was completed early and under budget with approximately 0.25
> defects/kLOC.
Wow.. that's impressive.
It also indicates to me that flagship projects in any methodology
seem to be incredibly successful. Perhaps it's because world-class
engineers are working on it and they're just successful regardless of
technology or tools or anything. The suggestion, though, may be that
success of projects done by mere-mortal engineers using a particular
methodology provide better data points.

I tend to think it's because the people working on it are familiar
with the methodology in its original incarnation, before marketers and
"armchair experts" corrupt it. That's why, whenever someone needs
an explanation of OO, I direct them to the work by Parnas and co. in
the 70s.

Bob Crispen

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

There's one OOD problem that so far I haven't seen mentioned
that's been the death of at least two programs I know of.

Object-Oriented Design is a paradigm, implemented by various
methodologies, in order to PRODUCE A GOOD DESIGN.

The go/no-go question that MUST be asked of an OO design
(the fruits of an OO methodology or process) is not "Does
this design scrupulously adhere to the tenets of our
particular sect of OOD?" but rather "Is this a GOOD DESIGN?"

Forgive my shouting, but it is perfectly possible to lose
sight of the goal in the quest for methodological
scrupulousness.

It's also perfectly fatal.
--
Rev. Bob "Bob" Crispen
cri...@hiwaay.net
"A polar bear is just another way of expressing a rectangular bear."

David B. Shapcott [C]

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

In article <32a5ceba...@news.nstn.ca>,

Tom Bushell <tbus...@fox.nstn.ns.ca> wrote:
>
>It is my growing opinion that this is a fundamental problem with all
>"formal" design methods, not just OO design. The effort involved in
>doing the design is as great or greater than doing the construction
>(coding). Contrast this with doing the blueprints for a bridge - the
>design effort is orders of magnitude cheaper than the construction.
>(Or so I believe - a civil engineer might correct me on this). Also,
>the OO design models I've studied don't seem to be very good maps of
>actual real world systems - there seems to be a big gap between high
>level architecture and running code. I believe there should be a
>fairly smooth continuum from high level to low level of detail.

>
>I'm starting to believe that design and code don't make sense as
>separate entities - the design should _become_ the code - the design
>documents for an implemented system are used as the foundation of the
>code, and then regenerated from the code. Major benefits would be
>that design work would not be discarded because it was too difficult
>to bring it up to date with reality. Therefore, the design should
>never get out of synch. This a similar idea to reverse engineering,
>but not identical.

In bridge engineering, design and construction take place in different
mediums, unlike software engineering. A software engineer designs and
implements on the same medium, a computer, providing an opportunity to
at least *partially* translate design information directly into
implementation. Mechanical translation is less error-prone than human
translation, and far less labor intensive.

The complete implementation cannot be synthesized mechanically, but
such translation should be maximized. If the developers *depend* on
design information, they will keep it up to date (they really have no
choice). IME, flow control seems to be the correct level for
partitioning.

The concept you refer to is termed `round trip engineering' (RTE).
Trying to synthesize up-to-date design from the end product
(i.e. reverse engineering) is, however, the wrong way to achieve RTE.
RTE works best when changing a design product is the only and *best*
method for effecting change in the implementation. (This reverses
the path of generation you propose.)

Manual update and synchronization of design products with
implementation never gets done. Never, IME. Reverse engineering only
provides an up-to-date design product at the end of the implementation
phase (IMO, synchronization should be continuous) -- although RE helps
immensely when the developers are dealing with legacy or third party
products.

Some RTE schemes also suffer because the reverse engineering relies too
heavily on the code generator. Even minor changes to the generated code
can defeat reverse engineering.


--
D. Brad Shapcott [C] Contractor, Motorola Cellular Infrastructure Group

"Theory changes the reality it describes."

"brian_c._miller_(volt_computer)_(exchange)"

unread,
Dec 6, 1996, 3:00:00 AM12/6/96
to

Ranjan Bagchi <ranjan...@pobox.com> wrote in article
<32A863...@pobox.com>...


> Myles Williams wrote:
> > That would be the New York Times database, created by IBM circa 1970.
> > It was the first full-scale application of structured programming, and
> > was completed early and under budget with approximately 0.25
> > defects/kLOC.
>

> It also indicates to me that flagship projects in any methodology
> seem to be incredibly successful. Perhaps it's because world-class
> engineers are working on it and they're just successful regardless of
> technology or tools or anything. The suggestion, though, may be that

> success of projects done by mere-mortal engineers using a particular
> methodology provide better data points.

It's not the fact that a world-class engineer is working on the problem
that makes it succeed. It is because it is being *engineered* by someone
who understands the problem and has taken the concept of software
engineering to heart. I have had the experience of a project working the
first time I removed all of the parser (syntactical) errors from my code.
I ran it against the tests, and all of them passed. Yes, the first time
around there were zero errors.

When I review the software I have written, I can make a rule: My best
programs are those which I engineered, and the worst are those I didn't. I
once maintained two programs I wrote for over three years. One was
engineered all the way, the other started small and grew like mold. The
one which was still readable, understandable, and easily modified was the
one I engineered. The other was difficult to follow and hard to modify.

At the last job I had, I worked on a piece of software which was written a
decade ago and upgraded/modified/fixed in the meantime. Working on it was
a real nightmare. Remember: 80% of a program's life will be in
maintenance. Therefore, smart managers and programmers will do all they
can to reduce that cost up front.


> How this applies to the success of O-O is interesting because I think
> most people agree that misapplied O-O results in failed projects
> which sully O-O's reputation. Is it easier for mere-mortals to
> understand O-O?

Yes, "mere mortals" can and do use O-O design. Yes, you *do* need to have
a good "design head" around, i.e., somebody who can visualize the design
well. Don't let a stupid person do this. Stupid people create stupid
designs, no matter what technology, process, methodology, or language they
use. (I think Scott Adams also remarked on this in "The Dilbert
Principle")

When I start designing software, I use the Warnier-Orr and Jackson
Structured Programming methodologies, state machines, language models,
distributed processes, and a personal variant of Booch (it's something I
started doing before Booch came along, it's similar according to the
magazine articles I read, and I haven't bothered to buy his book yet).

Yes, top-down design works, but the catch to all of it is that you have to
>design< (engineer, whatever) the whole program. It does no good to
engineer part of it, and write the rest on an ad-hoc basis. Don't do that,
you'd be asking for trouble. All of the people on the project must take
software engineering to heart. If there are members on the team who either
don't care to engineer the software or are morons, get rid of them, they
will only be trouble.


> Should mere-mortals be in the software development business at all is
> another question, which is vaguely frightening.

Why, of course they should! Just beware the morons. Remember, there is a
difference between a structured program and a program's structure.

What I learned from college was "follow the data" (rather like the
Watergate investigative reporters' "follow the money"). Understand the
data, and the design shall become clear to you. (Hmmm, a combination of
Nixon and Zen. I wonder what's for lunch! ;) Write tests for the modules
of the code as you go along. The principles of software engineering can be
applied to anything you come across, be it object-oriented,
client-server, multi-threaded, whatever. Remember that it also takes
creativity and imagination to achieve a good design, not just brute force.



Ell

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

Piercarlo Grandi (p...@aber.ac.uk) wrote:
: "rmartin" == Robert C Martin <rma...@oma.com> writes:
: > Harry wrote
: >> In fact IMHO an OO team has no place for anyone who cannot do all
: >> three tasks. [Analysis, Design, and Implementation]

: > Agreed, emphatically.

It seems you all are not considering all factors here. For instance,
because someone is a good Java programmer does not necessarily mean they
are good at working with users to formulate analysis, or that they have
good architectural skills for medium sized or large projects. And vice
versa.

Some people are better at some things than others and some people
shouldn't do certain things at all.

: Architecture, as you have so many times argued, is extremely important,
: and the implementor that is not guided by sound architectural
: principles, by close interaction with analysis and design, is not going
: to do a nice implementation.

If you are speaking of Martin, he has only accepted that project coders
should be required to follow architecture within the last 6 months
(partially at my urging). WRT analysis he has never to my knowledge
accepted that an overall analysis should be done at the outset of a
project and that it should lead the creation of project architecture.

Elliott

Nick Thurn

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

Todd Hoff wrote:
>
> If I invented a hammer and 90% of people couldn't use
> it correctly, would we blame the hammer or the people?
> It seems those who've "got" OO blame the people. Maybe we
> should blame the hammer. Maybe OO just won't work in
> the mass market of building applications. Not that it
> can't, but that it doesn't work often enough to make it
> universally appropriate.

Todd,

I think it is the expectations placed on the people.
Michelangelo used a hammer and chisel to produce great art.
I use them to bang a hole in my kitchen wall. If I expected
great art I would be disappointed.

The expectation that *all* those who use OO should be producing
reusable, high-quality stuff is false; however, IME that expectation
appears to be (or at least to have been, initially) the norm.

For the "average" programmer OO should be mainly using not
creating. Of course it's chicken and egg, something must be
created to be reused.

cheers
Nick (my opinions only)
"If I had a hammer, I'd hammer in the ...."

Harry Protoolis

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

On 7 Dec 1996 02:02:05 GMT, Ell <e...@access2.digex.net> wrote:
>Piercarlo Grandi (p...@aber.ac.uk) wrote:
>: "rmartin" == Robert C Martin <rma...@oma.com> writes:
>: > Harry wrote
>: >> In fact IMHO an OO team has no place for anyone who cannot do all
>: >> three tasks. [Analysis, Design, and Implementation]
>
>: > Agreed, emphatically.
>
>It seems you all are not considering all factors here. For instance,
>because someone is a good Java programmer does not necessarily mean they
>are good at working with users to formulate analysis, or that they have
>good architectural skills for medium sized or large projects. And vice
>versa.

This is a question of direction. I accept that there will be individuals
who can only do implementation under the guidance of more senior
designer/implementors, and that possibly not all of your
designer/implementors might do analysis. These are acceptable trade-offs,
given the reality of staffing projects.

However, I would never, ever, ever hire an individual who claimed to
be a 'software architect' who was not *at least* a competent programmer,
and I would try hard to find one who was an exceptional programmer.
I would expect a person calling themselves an 'analyst' to be either a
competent designer/implementor or to have very strong domain-specific
skills, or better yet both. If they only have the latter, then
their role would, by necessity, be limited.

>Some people are better at some things than others and some people
>shouldn't do certain things at all.

Sure, but IME if you can write software, you can't be an 'architect' period.

>: Architecture, as you have so many times argued, is extremely important,
>: and the implementor that is not guided by sound architectural
>: principles, by close interaction with analysis and design, is not going
>: to do a nice implementation.

>If you are speaking of Martin, he has only accepted that project coders
>should be required to follow architecture within the last 6 months
>(partially at my urging). WRT analysis he has never to my knowledge
>accepted that an overall analysis should be done at the outset of a
>project and that it should lead the creation of project architecture.

See, you said it again. 'project coders' as if they were a separate
bunch of people. If you make the 'Architecture Team' the core of the
'Coding team' then the issue of 'requiring' does not come up. Of course
the implementation will follow the architecture, if the architecture is
being developed by the same people who are leading the implementation effort.

H
-

Harry Protoolis

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

Point taken. I guess what I am saying is that the hard statistical
evidence needed doesn't exist yet. It is too early to call OO a failure,
and given that all we have is anecdotal evidence, IME the balance is on
the positive side.

Kazimir Majorinc

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

Hello!

I have spent the last two years programming (for me) very complex
structures in C++, and before that in Borland's Object Pascal. Although I
was very enthusiastic about OO in the beginning, I am losing that
enthusiasm day by day. A few months ago I thought that Smalltalk could be
better than C++; now I have my doubts about that. Here is why.

1. My analysis of my work shows that I would write programs faster if I
   used one of the good old procedural languages, like Modula-2. I know how
   you could criticize me, but I am exposing myself here. 8-)

2. Encapsulation - the idea that both data and functions live together in
   an object - seems to me a very unnatural shape today. Look at
   mathematics. Mathematical language does not use that paradigm, although
   the things described there are more complex than any software.
   It uses something which looks like a procedural paradigm with operators.
   This tells me that, paradoxically, the object paradigm may work only for
   simple problems, but for complex ones we have to come back to the
   procedural style. Looking purely hierarchically, functions should be one
   level higher than data. Moreover, it is especially unnatural that one
   object contains functions which use many others. If I overload
   operators, for example + in C++, I find it distasteful to use the object
   (member) model, where I have to privilege the first operand when there
   is absolutely no reason for that. If I could choose, I would use a
   procedural (free-function) overload (see the sketch after this list). If
   it is necessary to make groups of functions, it is better to use some
   sort of packages. A class with a lot of member functions looks really
   unnatural. I believe Smalltalk is even worse here.

3. Polymorphism. The greatest part of OO. I understand the wish, but look
   at C++. Why, if I want to do these things, do I have to do them
   implicitly? I mean, why should functions which override each other have
   to share the same name? It would be better to do it explicitly, to say
   which function is an override of which; then things could be simpler. I
   do not know how to do it in the procedural paradigm, but I believe that
   it is somehow possible. (The sketch after this list touches on this too.)

4. Inheritance. It seems OK.

5. Constructors, Destructors. They are great!

6. Messages. I do not know a lot about this, but the idea that an object
   changes itself reminds me of the days of programming on a TI-57, or in
   assembler, when every instruction operated on the so-called accumulator.
   OO wants the accumulator back.
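
A minimal C++ sketch of points 2 and 3 (Vec, Shape and Circle are made-up
names, and the `override' specifier only appeared in C++11, well after
this thread was written):

#include <iostream>

struct Vec {                     // hypothetical 2-D vector, for illustration
    double x, y;
};

// Point 2: a free-function overload treats both operands symmetrically,
// instead of privileging the left-hand object the way a member would.
Vec operator+(const Vec& a, const Vec& b) {
    Vec r = { a.x + b.x, a.y + b.y };
    return r;
}

// Point 3: the override relationship is normally implicit (same name and
// signature); C++11's `override' at least states the intent explicitly.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.14159265 * r * r; }
};

int main() {
    Vec a = { 1, 2 }, b = { 3, 4 };
    Vec c = a + b;                         // neither operand is "preferred"
    Circle circle(2.0);
    std::cout << c.x << "," << c.y << " " << circle.area() << "\n";
    return 0;
}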

I would appreciate a copy of any answer privately (because news on my
server expires fast), and please use my name, so that I can find you with
Deja News. Of course, a public answer is even better.

_______________________________________________
Author: Kazimir Majorinc
E-mail: Kazimir....@public.srce.hr
kma...@public.srce.hr (slightly better)
http: //public.srce.hr/~kmajor (~7min to USA)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One who knows the secret of the 7th stair

Steve Heller

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

Todd Hoff <t...@possibility.com> wrote:

>Daniel Drasin wrote:
>>

>If I invented a hammer and 90% of people couldn't use
>it correctly, would we blame the hammer or the people?

If I invented an electron microscope and 90% of people couldn't use
it, would we blame the electron microscope or the people? In other
words, the complexity of the job that the tool needs to do matters as
well.

>It seems those who've "got" OO blame the people. Maybe we
>should blame the hammer. Maybe OO just won't work in
>the mass market of building applications. Not that it
>can't, but that it doesn't work often enough to make it
>universally appropriate.

Or maybe it's just taught poorly. I've been pretty successful in
teaching OOP to people who don't know it already. However, it takes an
awful lot of work on both the instructor's and the student's part, as
well as a proper approach. That is, rather than my trying to cram
dozens of constructs down the student's throat, we take them slowly
and in the proper sequence. The student won't know as many facts when
we get done as if he'd read a "Learn C++ in Five Seconds" book, but he
or she will KNOW and UNDERSTAND the material I've taught.


Steve Heller, author and software engineer
http://ourworld.compuserve.com/homepages/steve_heller


Steve Heller

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

rma...@oma.com (Robert C. Martin) wrote:

>I am saying that the creation of an elite "architecture team" composed
>of members who:
> 1. Dictate their decisions to the programmers.
> 2. Do no programming themselves.
>is a prelude to a disaster.

>Architects must first be programmers. Architects must continue to write
>some code. Architects must sell, or at least negotiate, their architectures
>with the developers.

Absolutely correct in all regards.

Tansel Ersavas

unread,
Dec 7, 1996, 3:00:00 AM12/7/96
to

... Discussion deleted

> >It seems those who've "got" OO blame the people. Maybe we
> >should blame the hammer. Maybe OO just won't work in
> >the mass market of building applications. Not that it
> >can't, but that it doesn't work often enough to make it
> >universally appropriate.
>
> Or maybe it's just taught poorly. I've been pretty successful in
> teaching OOP to people who don't know it already. However, it takes an
> awful lot of work on both the instructor's and the student's part, as
> well as a proper approach. That is, rather than my trying to cram
> dozens of constructs down the student's throat, we take them slowly
> and in the proper sequence. The student won't know as many facts when
> we get done as if he'd read a "Learn C++ in Five Seconds" book, but he
> or she will KNOW and UNDERSTAND the material I've taught.
>

Ladies and Gentlemen,

Before we discuss 'what is wrong with OO', shouldn't we discuss what is
wrong with the current popular paradigm, procedure orientation?
Most of us seem to have forgotten that it is just a historical accident
that we program the way we do in the procedural approach, and if we are
in a total mess as software developers, one of the biggest reasons is our
insistence on procedure orientation. Most of the current OO
implementations carry the legacy of people with a procedure-oriented
background, and that reflects quite badly on the projects. Languages
that easily allow such escapes also contribute to this phenomenon.

Once we accept the problems of procedure orientation (see my
previous posting to comp.object on the subject), then we can look at
the alternatives.

When we conquer new places, the first to go usually perish. It is tough
to be a pioneer. However, with persistence and perseverance, we build the
necessary access and support systems, and that new place becomes very
hospitable and finally a source of great wealth. Luckily, the time for
being a pioneer in OO is just about to pass. Now come better times and
even prosperity.

There are three major reasons why OO projects fail. All of them are
captured by the great wisdom of the Jedi in "Star Wars".

These are:
"Do or do not. There is no try"
Using my tools and techniques, I can prove to you that I can produce
better and faster systems using OO (please read my notes at the end
of this message). If I can do it, so can you. If you just try to do
it, you will fail. Be determined to do it.

"You must unlearn what you have learned"
People cling so heavily to the baggage they have been carrying that
they cannot have an open mind about OO. So the first thing I do
in my training sessions is to create doubts and questions about
the problems of the procedural approach, and about why procedure
orientation is a very ineffective technique for most new problems.
Of course, you need a very good mentor who is capable of
demonstrating these things in practical terms.

"You must believe in what you are doing"
OO will help you. It will feel awkward at times, but you must
persist with it. You will be eventually rewarded.

Coming to the question of "What is wrong with OO", the question should
read "What are the problems in the current state of OO that slow down
its progress?"

There are three major problems that slow down OO:
. Lack of expertise, personal and team skills (human issues)
. Lack of fast, efficient and practical tools and environments, which makes
  programming one of the most labor-intensive, miserable jobs
  available Today
. Lack of practical OO application techniques and ways that will
  integrate OO with other successful paradigms

The current state of OO suffers from all of the above. Each and every one
of these problems is soluble; indeed, as a company, we are working on and
have at least intermediate solutions for all of them.

BTW, I get a much better response to OO from children. For that reason,
I'll offer educational versions of my tools and techniques to schools so
that children can be exposed to these techniques before their minds are
cluttered by the current dominant paradigms.

Tansel Ersavas
RASE Inc.
mailto:tan...@deep.net
http://www.rase.com/

Tansel Ersavas

unread,
Dec 8, 1996, 3:00:00 AM12/8/96
to

Dave Griffiths wrote:

> The last OO project I worked on was a spectacular disaster. A "generic"
> reservations system for one of the biggest entertainment companies in the
> world. It was cancelled a couple of months ago after millions down the
> drain. You won't read about it anywhere because these things aren't discussed
> publically. That in itself is part of the OO "problem" - you only hear about
> the successes. This particular project failed through lack of a coherent
> technical vision. There was nobody "in charge", just a bunch of developers
> making it up as they went along. They were pretty talented, but that RAD
> approach simply doesn't scale up for large projects.

I think that there are a few points here that should be discussed.

1. "The cancelled projects with money down the drain" is not a part of
the OO problem, but a general IS one. I know two 100 million projects
cancelled after 5 years of great hopes, money and sweat, and they
weren't OO projects. Every year, hundreds of BILLIONS of dollars are
down the drain because of blown IS budgets and cancelled projects. In
fact I would doubt very much that cancelled OO projects to successful OO
projects ratio to be anything near cancelled traditional style projects
vs overall traditional projects. The only reason we hear about failed
traditional systems is that there are so many of them and they are much
more spectacular than the one you are mentioning. Still only a very few
of them get a mention.

2. You were very right to observe the reasons why the project
failed. However, this is a more general syndrome of industrial-era
mentality applied to complex systems.
As we move into the post-industrial era, most of our problems come from
our inability to create new types of organizations that can deal with
such situations.

3. I am not a particular fan of the RAD approach. It doesn't imply OO;
in fact, OO is only tacked on to it later. RAD is Today's techniques
applied to Yesterday's organization structure. OO is not essential to
RAD, and if RAD fails, the failure cannot be attributed to its
tacked-on OO component.

4. You were right to point out scaling as the biggest problem. However,
although OO as practiced Today has its scaling problems, properly done it
has a much bigger chance of scaling than the traditional approach.

> OO is pretty easy to sell though. You get the salesman to fly in from
> California, take the managers out to lunch, talk loudly, smoke big cigars
> and so on, then it's back to the conference room for a demo "blah blah,
> ten times faster than existing development environments, blah blah", they
> show how you can build a GUI interface to the database with just a few
> drag'n'drops. Managers are bewitched, they agree to a pilot project to
> build a prototype, in come the consultants to head up the project, they
> knock up a few bubble diagrams, the prototype gets built and looks better
> than anything the managers have ever seen and that's it, they're hooked.

I'm glad to hear that. In my time it was much harder.

> By the time it all goes horribly wrong, the fast talking consultants who
> did the original "design" are long gone (natch) and are now trying to sell
> someone their Web based solution...

Hey, they have to live, life goes on. What's wrong if they caused a few
million to be wasted at one client's expense?
Frankly, I couldn't have kept a straight face if I were responsible for
such a failure.

Anyway, I think that for OO to succeed, there are certain prerequisites.

1. People and the organization that they are in
2. Proper techniques (such as OO)
3. Tools

The people and organization issue is the most important one. If you get
the best people and put them into an average organization, they will be
wasted. Why?
Current organizations are based on Smith, Fayol and Taylor's industrial
era principles. These principles are based not on trust and liberty, but
on suspicion and control. It is a restrictive rather than a nourishing
environment. In this environment, people are restricted by the system
they are in, and act accordingly.
I personally witnessed a case in a big organization where three of my
friends were working on a project they knew wouldn't be successful,
because by the time they had finished it, the system that they were
developing the software for would be outdated, and it being a legacy
proprietary system, they wouldn't be able to replace the hardware, and
there would be no cheap way to port the software. They tried to explain
this to the management, but not wholeheartedly. Nobody listened. As nobody
owned the system, the software was finished, and shelved.

In this type of organization the process is divided into little steps,
each of which is performed by a non-overlapping team. Only the managers
high up are supposed to have the overall vision and knowledge, most of
which has already been filtered on its way up and down; therefore
collective vision and knowledge are nowhere to be found.

Alternative structures are networking organizations. Indeed, there are a
few companies enjoying the benefits of human networking, and ours strives
to be one of them.

As for OO, even if it doesn't work, it should work. I saw it working,
and I know it is repeatable.

Tools are another chapter, indeed another book, that can help us out of
our current misery. However, I don't have time to elaborate on that.

Overall, it is a jigsaw; if parts are missing, it doesn't look good.

Remember, reading the same Bible, some go on crusades and kill thousands,
and some become loving, caring people. Using the same knife, a chef
creates masterpieces that are delicious, yet others use it to threaten
people. Techniques are just like that. Not magic wands. People can use
OO to drown themselves and others in a spectacular mess, or to glorify
themselves and their organization. Let's create a unified vision for the
people first, and then expect good results.

Finally, if OO won't work, what are the alternatives? Certainly not the
old ways.

Kind Regards

Chuck Rabb

unread,
Dec 8, 1996, 3:00:00 AM12/8/96
to


Thomas Gagne <tga...@ix.netcom.com> wrote in article
<32A7AC...@ix.netcom.com>...

> Whether your technology is assembler, C, RDBMS, Smalltalk, distributed
> processing, or whatever, it can be sabotaged by;
> mismanagement (etc...)

I agree. We have been working on a project to re-write all of our business
systems (currently written in Informix 4GL and C) in Smalltalk, using the
Fusion design methodology.
The problems we have had, and the reason we are failing (my opinion), are
not technology based.
It's management's inability to set a course and follow it for any length
of time, and also upper management being snowballed by the trade rags'
constant promise of "fast" development.


> Take for instance the many hurled affronts to RDBMS as being slow. I
> still regularly read and hear of supposedly credible experts
> recommending denormalization as a way of overcoming performance
> problems. I've discovered the problem isn't with your engine, it's how
> you're using it. Have you tried using multiple connections executing
> parallel queries to eliminate join processing? If that sounds too
> difficult, maybe you should get a systems programmer to develop a
> three-tier system for you so the more complex programming can be hidden
> in a middle tier (possibly where it belongs?) rather than in your
> application.

Hear, hear.
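
A toy sketch of the parallel-query idea quoted above, for what it's worth;
the hard-coded vectors stand in for result sets coming back on two
separate connections, and the "join" happens in the application (middle)
tier rather than in the SQL engine:

#include <future>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical result rows; in a real system these would come from two
// separate database connections instead of the hard-coded data below.
struct Customer { int id; std::string name; };
struct Order    { int customerId; double amount; };

std::vector<Customer> queryCustomers() {   // stands in for one query
    return { {1, "Acme"}, {2, "Globex"} };
}
std::vector<Order> queryOrders() {         // stands in for the other query
    return { {1, 10.0}, {2, 20.0}, {1, 5.0} };
}

int main() {
    // Run both "queries" concurrently, as if on two connections...
    auto customers = std::async(std::launch::async, queryCustomers);
    auto orders    = std::async(std::launch::async, queryOrders);

    // ...then match the rows in the middle tier instead of with a SQL JOIN.
    std::map<int, std::string> byId;
    for (const Customer& c : customers.get()) byId[c.id] = c.name;
    for (const Order& o : orders.get())
        std::cout << byId[o.customerId] << " ordered " << o.amount << "\n";
    return 0;
}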

Piercarlo Grandi

unread,
Dec 8, 1996, 3:00:00 AM12/8/96
to

>>> "tbushell" == Tom Bushell <tbus...@fox.nstn.ns.ca> writes:

tbushell> On 05 Dec 1996 22:30:40 +0000, p...@aber.ac.uk (Piercarlo
tbushell> Grandi) wrote:

pcg> *If* analysis and design efforts were conducted in resonance with
pcg> each other and implementation, then spending more effort on those
pcg> than coding would be all fine and actually rather useful,

tbushell> I agree that the effort is useful. But my gut feeling is that
tbushell> with better (and apparently undiscovered, as yet) processes
tbushell> and tools, the high level design activity should be about 10%
tbushell> of the total project, not around 50%.

Perhaps the reverse: if the tools were really advanced, perhaps
including a program generator (and despite claims to the contrary no
such thing has yet been produced), then high-level design activity would
be almost all of the project. As things stand, human work on the ``lower''
levels of a project is indispensable, and communication problems between
the humans doing the various levels of abstraction, if the total effort
is divided into groups corresponding to such levels of abstraction, can
cause embarrassing problems, as Robert Martin has always observed.

tbushell> Contrast this with doing the blueprints for a bridge - the
tbushell> design effort is orders of magnitude cheaper than the
tbushell> construction. (Or so I believe - a civil engineer might
tbushell> correct me on this).

pcg> It is usually _cheaper_, but on the other hand it might take
pcg> _longer_.

tbushell> I assume this is because the design is the work of a much
tbushell> smaller team, whose only physical output is computer models or
tbushell> paper.

Not quite, because while _building_ the bridge is an almost-mechanical
project, designing it requires a lot of hard thought, and consultation
with users, and in particular lots of iterations and refinements.

However, designing a bridge and building it are not a good analogy for
analysis/design vs. coding; they are more like an analogy for all three of
analysis/design/coding vs. execution.

Naturally, building a bridge from blueprints is not quite as mechanical
as executing a program, for bridges are built by human teams using quite
detailed but not-so-precise blueprints. Still it has the characteristics
of execution: an abstraction is turned into an instance (and potentially
many instances, in the case of things other than bridges of course).

tbushell> This is my point - other engineering disciplines appear to
tbushell> routinely put much less total effort into design, with much
tbushell> greater success. I guess this is just the positive result of
tbushell> greater maturity as a discipline.

Well, my impression is exactly the opposite: that the design of material
entities like a new car or an airplane model requires immense amounts of
money and time compared to almost any software project, and as many
iterations, and as much debugging, if not more; and then there are as
many *design* bugs (as opposed to manufacturing defects) in the finished
products. The saving grace is that cars and other physical systems, like
a building (but surely not airplanes), are usually, though not always,
simpler than software.

Even in terms of *cost*, designing a new car or airplane can be a
significantly large part of the cost of each instance of the design. I
would even argue that the percentage of the sale price of instances that
repays the development cost can be significantly higher for cars or
airplanes than for software.

Then, while a large part of the sale price covers instantiation costs
for cars and airplanes, the instantiation costs of software are very
small, and most of the sale price covers marketing expenses and
profit; in this respect software is more like soft drinks and perfumes,
physical products sold on their intangible value, than like cars or
airplanes.

tbushell> Also, the OO design models I've studied don't seem to be very
tbushell> good maps of actual real world systems - there seems to be a
tbushell> big gap between high level architecture and running code. I

This is a good reason why architectures as maps of real-world systems are
not such a good idea.

tbushell> believe there should be a fairly smooth continuum from high
tbushell> level to low level of detail.

piercarl> Why?

tbushell> Why not? ;-) (Don't know what you're asking here...)

I am asking for an argument to support the statements you make. You
support your observations with references to "my gut feeling", "Or so I
believe", and "I believe there should be".

This is all fine, but then you should provide some argument as to why
your gut feelings or beliefs are what they are; for they concern points
where plausibility can cut either way, and it's hard to trust one's own
instincts in such matters.

pcg> But OO is in large part about this: the ``high level''
pcg> modules/classes/prototypes are supposed to capture the essence of
pcg> the design. Pointing some sort of OO program browser to a program
pcg> source and removing from the picture the lower levels of
pcg> abstraction *ought* to reveal the design. This *ought* to be the
pcg> case with structured programming methods in general, and with OO in
pcg> particular it should be even more pleasant because of the
pcg> disciplined modularization of the program it entails.

tbushell> Absolutely! But why doesn't it work out that way?

Because achieving this requires hard thinking. This is typically beyond
the state of the art.

Or perhaps because the rather vague statements by those who believe in
``silver bullets'', in particular those with ``real world modeling'' on
them, mean that many people don't focus hard enough on the structure of
programs _as such_; there is evidence as to what is a good structure for
a model of the ``real world'', and then that this would also be a good
structure for a program. There is instead some sparse but good evidence
about what is a better structure for a program as such.

Harry Protoolis

unread,
Dec 9, 1996, 3:00:00 AM12/9/96
to

On Fri, 06 Dec 1996 16:36:36 -0600, Robert C. Martin <rma...@oma.com> wrote:
>In article <slrn5afin1...@matilda.alt.net.au>, ha...@alt.net.au wrote:
>
>> Despite being a 'self centered hacker' (I like that title :-)) I
>> actually agree with you Elliot. You should at least sketch your
>> architecture out after your analysis is complete, (where analysis ==
>> primary use cases), and design to it.
>
>Actually, I prefer to do much more than just "sketch it out", the architecture
>should be very well defined, and very detailed. However, I also prefer to
>begin producing the architecture well before all the analysis is complete.
>And I prefer producing code that is conformant to the architecture - and the
>analysis - long before the architecture is complete.
>
>In other words, I like to do them all concurrently.
>
>This doesn't mean that I *ever* code something that is not designed.
>It also doesn't mean that I design something that has not been analyzed.
>It's just that I don't wait for *all* the analysis to be completed before
>I begin on designing the architecture. And I don't wait for the complete
>architecture before I begin on the code.

Sure, but I tend to do an overall sketch to give me the big picture before
diving into the detail at any point. I find this helps to size the
problem up.

One force at this point in a project lifecycle is the need to estimate
effort. I usually use a preliminary analysis as a significant input to
the estimation process, but you can rarely afford to do very detailed
architectural work up front (prior to contract signing).

As a result I find you begin the 'natural' process with at least a
preliminary analysis. From that I tend to sketch architecture, then pick
the hardest/highest-risk part and begin the iterative process of detailed
analysis, architecture, design and implementation.

The initial sketch helps to give some overall uniformity to the
architecture without being a straitjacket to the process.

Todd Hoff

unread,
Dec 9, 1996, 3:00:00 AM12/9/96
to

Steve Heller wrote:
> If I invented an electron microscope and 90% of people couldn't use
> it, would we blame the electron microscope or the people? In other
> words, the complexity of the job that the tool needs to do matters as
> well.

The analogy doesn't hold as an electron microscope is a specialized
tool. OO is not supposed to be a specialized tool but a general
methodology for designing and implementing software systems.

> Or maybe it's just taught poorly. I've been pretty successful in
> teaching OOP to people who don't know it already.

Or the flip side, why is it so hard to learn? And if it
really takes top teachers working over extended periods
of time with individual students to learn OO what is the chance
of it being taught properly in the large?

Nigel Tzeng

unread,
Dec 9, 1996, 3:00:00 AM12/9/96
to

In article <slrn5ai5m2...@matilda.alt.net.au>,

Harry Protoolis <ha...@alt.net.au> wrote:
>On Fri, 06 Dec 1996 09:09:54 -0500, Ralph Cook <ralph...@mci.com> wrote:
>>Harry Protoolis wrote:

[snip]

>Point taken, I guess what I am saying is that the hard statistical
>evidence needed doesn't exist yet. It is too early to call OO a failure,
>and given all we have is anecdotal evidence, IME the balance is on the
>positive side

FWIW In Rise and Resurrection Ed Yourdon has an excerpt from "Survey
of Advanced Technology" by Chris Pickering for the years 1991 and
1993.

The top performer in 1991 was OO/OOPS with percentage used 3.8,
percentage succeeded 91.7 and effective penetration 3.5.

In 1993 the worst performer was OO/OOPS with percentage used 11.9,
percentage succeeded 66.3 and effective penetration of 7.9%.

As a reference Structured Methods had a 84.2 success rate in 1993.
RDBMS were the top performer of that year at 96.0 (Gee...I guess we
finally know how to write and use RDBMS eh?).

I never did bother to find the original study so I don't know the
sample size, how he gathered data and so forth.

As with all statistics YMMV.

Harry Protoolis

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

On 7 Dec 1996 07:28:21 GMT, Harry Protoolis <ha...@matilda.alt.net.au> wrote:
>
>Sure, but IME if you can write software, you can't be an 'architect' period.
^^^
Oops, I mean 'can't' (of course) :-)

H
-


Harry Protoolis alt.computer pty ltd

ha...@alt.net.au software development consultants


Steve Heller

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

Todd Hoff <t...@possibility.com> wrote:

>Steve Heller wrote:
>> If I invented an electron microscope and 90% of people couldn't use
>> it, would we blame the electron microscope or the people? In other
>> words, the complexity of the job that the tool needs to do matters as
>> well.

>The analogy doesn't hold as an electron microscope is a specialized
>tool. OO is not supposed to be a specialized tool but a general
>methodology for designing and implementing software systems.

All programming is difficult. Programming is a specialized task,
which implies the need for specialized tools. Designing good libraries
is even more difficult than writing good programs, and it requires
talents in addition to those needed for application programming.
Therefore, library design should be done primarily by specialists.
This will make the job of the application programmer easier, not
harder.


>> Or maybe it's just taught poorly. I've been pretty successful in
>> teaching OOP to people who don't know it already.

>Or the flip side, why is it so hard to learn? And if it
>really takes top teachers working over extended periods
>of time with individual students to learn OO what is the chance
>of it being taught properly in the large?

I haven't had an "extended period" to teach OO, if by that you mean
more than a couple of months. Of course, the students will have to
apply what they learn in the real world in order to be truly fluent
with their new knowledge, but this is true of any applied field.
However, the necessity for good teachers is a real constraint, since
it appears that many "OO teachers" don't understand it very well
themselves. Another part of the problem is poor textbooks that don't
give enough background and depth so that the students can really grasp
the fundamentals; I'm doing what I can to fix that.

Snowball queries

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

Todd Hoff wrote:

> Or the flip side, why is it so hard to learn? And if it
> really takes top teachers working over extended periods
> of time with individual students to learn OO what is the chance
> of it being taught properly in the large?

It is NOT hard to learn. What is hard is how to UNLEARN. I have been
teaching OO for a long time and the problem IS that we have learned
procedure orientation first, which interferes a lot with OO teachings.
Though it is such an awkward way of modeling, we spend years and years
learning it, then stick to it as if it were the ten commandments. I
discovered that I can teach OO to children very quickly, whereas sometimes
it takes a considerable amount of time to initiate professionals to OO.
OO is much closer to the human thinking and reasoning process than
the procedure-oriented approach. Good OO teachers first explain well why
procedure orientation is a historical accident, and they convince their
audience as to why the procedure-oriented approach is in fact a terribly
inefficient way of modeling large systems. Then they introduce OO.

Todd Hoff

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

Piercarlo Grandi wrote:

> Perhaps the reverse: if the tools were really advanced, perhaps
> including a program generator (and despite claims to the contrary no
> such thing has been yet produced), then high level design activity would
> be almost all the project.

If you mean a magic machine that eats a spec and
spits out perfect code for a target then you are right,
no such thing has been produced. But I have created many
times, as have others, domain-specific languages where
custom-made code generators can be plugged in that automate
large chunks of a system. Programmers can specify what
they want in the language and a few people on the project
can work on the code generators for the features. Works
like a charm. Unfortunately it is not "coding", so most
managers and programmers see such approaches as a
waste of time.
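
Not anything Todd actually built - just a toy of the shape such a thing
can take: a made-up one-line spec language and a generator that expands it
into C++ declarations.

#include <iostream>
#include <sstream>
#include <string>

// Reads a tiny made-up specification language of the form
//     record Name field:type field:type ...
// and emits C++ struct definitions - the "custom code generator plugged
// into a domain-specific language" idea, reduced to a toy.
void generate(std::istream& spec, std::ostream& out) {
    std::string line;
    while (std::getline(spec, line)) {
        std::istringstream words(line);
        std::string keyword, name;
        if (!(words >> keyword >> name) || keyword != "record") continue;
        out << "struct " << name << " {\n";
        std::string field;
        while (words >> field) {
            std::string::size_type colon = field.find(':');
            out << "    " << field.substr(colon + 1) << " "
                << field.substr(0, colon) << ";\n";
        }
        out << "};\n\n";
    }
}

int main() {
    std::istringstream spec(
        "record Point x:double y:double\n"
        "record Account id:int balance:double\n");
    generate(spec, std::cout);   // programmers maintain the spec; the
    return 0;                    // generator writes the repetitive code.
}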

Art Schwarz

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

!> People who have been a part of varied programming teams have often
!> learned that good programmers program well in any language. Bad
!> programmers program poorly in any language.

And the same applies to 'designers' and 'design methodologies'. Too
often a good design methodology is condemned by a bad design and a
bad design methodology is advanced by a good design.


Tom Bushell

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

On 6 Dec 1996 07:08:24 GMT, ha...@matilda.alt.net.au (Harry Protoolis)
wrote:

>IMHO, the trick with OO design is to do it *informally* most of the
>time. What I mean by this is that you sketch out high level architecture
>and analysis results as high level class diagrams and then give them to
>the 'implementors' to design and code.
>
>The implementation team then does design as a series of informal
>gatherings around a white board drawing state diagrams, detailed class
>diagrams, object diagrams etc. You then photocopy and file the
>whiteboard drawings and let the same group of people code from the
>hand drawn design.

Interesting! How many projects have you done this way?


>I then quite often reverse-engineer the resulting code and call *that*
>the formal system design.

Can you give a rough estimate of the level of effort required to do
this reverse engineering, as a percentage of total project effort? I
assume you believe this is more cost effective than doing a more
formal design up front.


>My point is that the *design* and the *code* come into existance
>together. To talk about the design becoming the code implies that it
>exists before the code in some sense.

Doesn't this contradict your previous description of the informal
process you follow - the first "design" is the hand-drawn sketches,
which you then give to your coders, who will probably modify the
design as they code? So it _does_ exist before the code.

Some interesting semantic issues here - what is meant by "design" and
"code"? The biggest distinction I tend to make is that a "design"
artifact is at a higher level of abstraction, and not runnable;
whereas "code" is runnable, and at the lowest level of abstraction. I
may have to reconsider these definitions - they tend to blur together
with the tools I'm proposing.


>I think that a fascinating tool would be a language-sensitive drawing
>tool in which you could switch your view between diagrams and code and
>write 'code' in either view.

This is very much as I envision it. What set me down this road of
thought was my experience doing high level design for the Prograph
class library, which was developed here in Nova Scotia. Prograph is a
truly visual language *at the code level*, not just a GUI builder
tacked onto an evolved version of FORTRAN, like VB/VC++/Delphi et al.
This experience opened my eyes to what is possible. If the "code" is
a "diagram", then the "design" is just another diagram at a higher
level of abstraction, and it should be possible to move back and forth
at will.

Interestingly enough, another local person, Randy Giffen, has
developed a browser that lets you take existing textual Smalltalk code
and display it visually, and modify it or write new code totally
within the visual environment. He says he hardly ever looks at the
textual code any more - it's easier to write in the visual mode, and
it's easier to understand existing code that way as well. Haven't had
a chance to play with it yet, but the demo he gave was _very_
impressive!

So, the pieces are all there, someone just has to put them together.

-Tom


----------------------------------------------------------
Tom Bushell * Custom Software Development
Telekinetics * Process Improvement Consulting
2653 Highway 202 * Technical Writing
RR#1, Elmsdale, NS
B0N 1M0
(902)632-2772 Email: tbus...@fox.nstn.ns.ca
----------------------------------------------------------

Tom Bushell

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

On Fri, 06 Dec 1996 11:18:43 -0500, Mukesh Prasad
<mpr...@dma.isg.mot.com> wrote:

>
>In practice, I have seen a shortened version,
>the "backward spec", i.e. specifications done
>from implementations (with modifications as required)
>work very well in certain cases. Much better than the
>strict "implement exactly from spec" approach.

The problem is that with current tools available to the average
developer, this is a manual step. Most shops don't have the
discipline to do it, so we end up with the current situation - the
only accurate description of the system is the code, which is at too
low a level of abstraction to easily develop system level
understanding.


>I believe these less top-down approaches work better
>because in a lot of cases, at specification time
>the product is very vaguely understood. Moreover,
>many implementation problems are not anticipated well.
>An actual, physical implementation can sharpen everybody's
>hazy understanding to the point where actually good design
>decisions can be made.

Agree 100%.

>Thus doing the spec from an
>initial implementation, and fixing the implementation
>to match the final spec, can yield much
>better results overall.

Even better - eliminate the spec/implementation dichotomy. The spec
is just an "outline", if you will, of the implementation, and remains
as an integral part, automatically tracking the implementation and
instantly viewable at any time.

I don't see any reason why we can't do this - to use Fred Brooks's
terms, it's just an "accidental" complexity, not an "essential"
one.
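
To make that concrete, here's a minimal sketch of what I mean (Python,
with a hypothetical module name - not an existing tool): the "spec" is
simply a view regenerated from the implementation whenever you want to
read it, so it cannot go stale.

# Sketch: the spec is an outline generated from the live implementation.
import inspect

def first_line(obj) -> str:
    doc = inspect.getdoc(obj) or ""
    return doc.splitlines()[0] if doc else "(undocumented)"

def spec_outline(module) -> str:
    """Produce a spec outline (classes, method signatures and one-line
    summaries) straight from the implementation."""
    lines = []
    for name, cls in inspect.getmembers(module, inspect.isclass):
        if cls.__module__ != module.__name__:
            continue                      # skip classes imported from elsewhere
        lines.append(f"{name}: {first_line(cls)}")
        for mname, meth in inspect.getmembers(cls, inspect.isfunction):
            lines.append(f"  {mname}{inspect.signature(meth)} -- {first_line(meth)}")
    return "\n".join(lines)

# Usage (hypothetical module name):
#   import billing
#   print(spec_outline(billing))

Because the outline is generated rather than edited, it automatically
tracks the implementation.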

Tom Bushell

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

On Fri, 06 Dec 1996 14:45:05 -0600, Roger Vossler <vos...@csn.net>
wrote:

>Then, when you understand what you are doing, commit
>to the CASE swamp of your choice.
>
>The problem is that people first buy a killer tool chest and spend
>large numbers of hours understanding how to use the beast and even
>more hours stuffing a database only to discover that they have
>created a big mess.

Agreed. But I think there _may_ be additional, perhaps more
fundamental problems:

1. The design/implementation dichotomy that works well in other
engineering disciplines does not map to software.

2. The design representations advocated by the gurus are not
appropriate or sufficient for real systems - they don't map to the
"code" in a useful way.

Think we've got a new thread here - "What's wrong with formal design?"

Piercarlo Grandi

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

>>> "piercarl" == Piercarlo Grandi <pier...@sabi.demon.co.uk> writes:
piercarl> Or perhaps because the rather vague statements by those who
piercarl> believe in ``silver bullets'', in particular those with ``real
piercarl> world modeling'' on them, mean that many people don't focus
piercarl> hard enough on the structure of programs _as such_; there is
piercarl> evidence as to what is a good structure for a model of the
          ^
          |
          little
piercarl> ``real world'', and then that this would also be a good
piercarl> structure for a program. There is instead some sparse but good
piercarl> evidence about what is a better structure for a program as
piercarl> such.

Roger Vossler

unread,
Dec 10, 1996, 3:00:00 AM12/10/96
to

Tom Bushell wrote:
> 1. The design/implementation dichotomy that works well in other
> engineering disciplines does not map to software.
>
> 2. The design representations advocated by the gurus are not
> appropriate or sufficient for real systems - they don't map to the
> "code" in a useful way.
>
> Think we've got a new thread here - "What's wrong with formal design?"
>
> -Tom
As far as point 1 is concerned, I would like to know more about
where the engineering design/implementation dichotomy breaks down
with software. OTOH, I have no doubt that software could use
a strong dose of engineering discipline.

Concerning point 2: wow, is this ever true. I read the books and
papers by the gurus and then program using several different
languages, frameworks, IDEs, etc. with the result that it takes
real work to bridge the gap. A lot of arm waving takes place
between OOA/D and working with a real system.

Roger Vossler, vos...@csn.net

Ell

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

Harry Protoolis (ha...@matilda.alt.net.au) wrote:
: Robert C. Martin <rma...@oma.com> wrote:
: >
: >However, I also prefer to

: >begin producing the architecture well before all the analysis is complete.
: >And I prefer producing code that is conformant to the architecture - and the
: >analysis - long before the architecture is complete.
: >...
: >It's just that I don't wait for *all* the analysis to be completed before

: >I begin on designing the architecture. And I don't wait for the complete
: >architecture before I begin on the code.

: Sure, but I tend to do an overall sketch to give me a big picture before
: diving into the detail at any point. I find this helps to size the
: problem up.

I think the word "overall" is key here vs. what is said immediately above
it. Overall meaning, as I see it, at least covering all points perceived
to be major. In my and others' experience, no "new" application production
coding should go on before having done overall application analysis and
architecture. This helps to avoid unnecessary re-work, among other
benefits.

Elliott

Todd Knarr

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

In <32a98036...@news.nstn.ca>, tbus...@fox.nstn.ns.ca (Tom Bushell) writes:
>1. The design/implementation dichotomy that works well in other
>engineering disciplines does not map to software.

>Think we've got a new thread here - "What's wrong with formal design?"

I think the problem with formal design goes right to your point #1,
since in most other engineering disciplines there *isn't* the strong
dichotomy between design and implementation. Think about an architect
designing a bridge, and how much he has to know about the actual
construction methods and materials involved to come up with a design
that can be built without falling down. No self-respecting architect
or mechanical engineer would, for instance, decide that stone is pretty
and would fit well with the landscape but arches and intermediate
support pylons wouldn't, so "we'll build a 1000-foot stone bridge as a
single span with no arches under it".

All too often, though, the "systems analysts" hand me a design document
which I'm supposed to implement which is exactly the programming
equivalent of that 1000-foot single-span stone bridge.

--
Todd Knarr : tkn...@xmission.com | finger for PGP public key
| Member, USENET Cabal
***** Unsolicited commercial e-mail proof-read at $50/message *****

Seriously, I don't want to die just yet. I don't care how
good-looking they are, I! don't! want! to! die!"
-- Megazone ( UF1 )


Nick Leaton

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

Piercarlo Grandi wrote:

> Perhaps the reverse: if the tools were really advanced, perhaps
> including a program generator (and despite claims to the contrary no
> such thing has been yet produced), then high level design activity would
> be almost all the project.

Available now, called a programmer.
--

Nick

Nick Leaton

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

> If you mean a magic machine that eats a spec and
> spits out perfect code for a target then you are right,
> no such thing has been produced. But I have created many
> times, as have others, domain-specific languages where
> custom-made code generators can be plugged in that automate
> large chunks of a system. Programmers can specify what
> they want in the language and a few people in the project
> can work on the code generators for the features. Works
> like a charm. Unfortunately it is not "coding" so most
> managers and programmers see such approaches as a
> waste of time.

Sometimes this can be a superb technique. For example, interfacing
with a DB. Write a schema file. Your code generator spits out code
from the schema file. If you have a bug in your design, it is easy
to change the generator, and fix all the relevant code.
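
Something like this toy generator, say (Python, with an invented schema
and names, purely to show the shape of the idea rather than any
particular product):

# Toy code generator: the schema is the single source of truth; if the
# generated pattern has a design bug, fix the generator and regenerate.
SCHEMA = {                              # stand-in for a schema file
    "Customer": ["id", "name", "email"],
    "Order":    ["id", "customer_id", "total"],
}

def generate_class(table, columns):
    params = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    sql = f"INSERT INTO {table.lower()} ({params}) VALUES ({placeholders})"
    lines = [f"class {table}:",
             f"    def __init__(self, {params}):"]
    lines += [f"        self.{c} = {c}" for c in columns]
    lines += ["    def insert_sql(self):",
              f"        return {sql!r}"]
    return "\n".join(lines)

if __name__ == "__main__":
    for table, cols in SCHEMA.items():
        print(generate_class(table, cols), end="\n\n")

As the poster above says, the feature knowledge lives in the generator;
fix a design bug there and every generated class is fixed on the next
run.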

--

Nick

Ell

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

Todd Knarr (tkn...@xmission.com) wrote:

: In <32a98036...@news.nstn.ca>, tbus...@fox.nstn.ns.ca (Tom Bushell) writes:
: >1. The design/implementation dichotomy that works well in other
: >engineering disciplines does not map to software.

: >Think we've got a new thread here - "What's wrong with formal design?"

: All too often, though, the "systems analysts" hand me a design document

: which I'm supposed to implement which is exactly the programming
: equivalent of that 1000-foot single-span stone bridge.

The role of the system architect(s) is to translate analysis into physical
design. It's nice when the analysts know the technical building materials
too, but any self-respecting architect should definitely know them.

Elliott

Tom Bushell

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

On 08 Dec 1996 18:44:08 +0000, pier...@sabi.demon.co.uk (Piercarlo
Grandi) wrote:

>tbushell> I agree that the effort is useful. But my gut feeling is that
>tbushell> with better (and apparently undiscovered, as yet) processes
>tbushell> and tools, the high level design activity should be about 10%
>tbushell> of the total project, not around 50%.
>
>Perhaps the reverse: if the tools were really advanced, perhaps
>including a program generator (and despite claims to the contrary no
>such thing has been yet produced), then high level design activity would
>be almost all the project.

Good point, although I suspect the bulk of the effort would be in what
is currently called "detailed" design. This is what I would like to
see happen.

>However designing a bridge and building it are not a good analogy for
>analysis/design vs. coding; more like an analogy for all three of
>analysis/design/coding vs. execution.

I _think_ you are saying you believe _building_ a bridge is analogous
to _executing_ a program. If so, my reply would be that executing a
program is more like opening a bridge to traffic - construction is
complete and it has been turned over to its intended users.

>Well, my impression is exactly the opposite: that the design of material
>entities like a new car or an airplane model requires immense amounts of
>money and time, as compared to almost any software project, and as many
>iterations, and as much debugging, if not more, and then there are as
>many *design* bugs (as opposed to manufacturing defects) in the finished
>products.

You may be right. I've never seen any statistics on how much of the
new product development effort can be attributed to design, but I know
it is significant.

>tbushell> Also, the OO design models I've studied don't seem to be very
>tbushell> good maps of actual real world systems - there seems to be a
>tbushell> big gap between high level architecture and running code. I
>
>This is a good reason why architectures as maps of real world systems are
>not such a good idea.

Interesting point. If you are saying that architecture/civil
engineering are perhaps not the best fields to look to for
inspiration, I'm starting to agree with you. Seems there may be more
profit in looking to mechanical engineering and biology - both deal
with much more dynamic real world objects. A software system is more
like a machine or an organism than like a bridge. Thanks for that
insight!

>
>tbushell> believe there should be a fairly smooth continuum from high
>tbushell> level to low level of detail.

The point I was trying to emphasize was my perception that there seems
to be a big chasm between the high level design models currently
advocated and running code. Perhaps this is why "over the wall"
processes fail - only the architect has enough understanding to make
the leap, so he/she must also implement.

This makes me suspect one or more of the following are true:

1. Current high level design models are inappropriate
2. Additional high level design models are required to supplement
existing models
3. "Intermediate" design models are required to bridge the gap between
high level and detailed design/coding.

>I am asking for any argument to support the statements you make. You
>support your observations with reference to "my gut feeling", "Or so I
>believe", and "I believe there should be".

Quite deliberate choice of words on my part. My intuitions are
_usually_ correct, which has served me well in predicting
technological trends. But I have only my limited experience to go on
with modern design methodologies, and was trying to get some harder
data, or at least anecdotal evidence to substantiate or refute my
hunches.

>tbushell> Absolutely! But why doesn't it work out that way?
>
>Because achieving this requires hard thinking. This is typically beyond
>the state of the art.
>
>Or perhaps because the rather vague statements by those who believe in
>``silver bullets'', in particular those with ``real world modeling'' on
>them, mean that many people don't focus hard enough on the structure of
>programs _as such_; there is evidence as to what is a good structure for
>a model of the ``real world'', and then that this would also be a good
>structure for a program. There is instead some sparse but good evidence
>about what is a better structure for a program as such.

Again, a good point. I read this to mean you might be in agreement
with my point #2 above about current models being insufficient.
Perhaps we should be using software patterns to constrain the
allowable design models that can be produced, so they will be more
likely to be implementable.

Alan Meyer

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

In article <58lbbo$8...@news.xmission.com>, tkn...@xmission.com wrote...
<<snip>>

>All too often, though, the "systems analysts" hand me a design document
>which I'm supposed to implement which is exactly the programming
>equivalent of that 1000-foot single-span stone bridge.

I once visited a large municipal government computing shop with 130 people
working there. I was told by the boss that as far as he's concerned, his
"systems analysts" are to do all the thinking and his programmers, he
called them "coders", are just supposed to translate those lofty thoughts
into code. He then thought that the reason the average programmer only
stayed 18 months (remember that's the average, I wonder what the good ones
were doing!) was because that was the nature of the business and programmers
were defective people anyway!

I personally believe that the division into "analysts" and "programmers" is
a dangerous one. If a person can't do both he is likely to do a lot of harm
to a project. An "analyst" that doesn't understand programming will often
specify impractical designs. A "programmer" that can't understand the
needs of the users will often build unusable programs. The best systems
always come from people who make it their business to understand the total
problem from the point of view of the user, the point of view of the
machine, and everything in between.


Matt Kennel

unread,
Dec 11, 1996, 3:00:00 AM12/11/96
to

Nick Leaton (nic...@calfp.co.uk) wrote:
: Piercarlo Grandi wrote:

: > Perhaps the reverse: if the tools were really advanced, perhaps


: > including a program generator (and despite claims to the contrary no
: > such thing has been yet produced), then high level design activity would
: > be almost all the project.

: Available now, called a programmer.

Exactamundo. The 'tool' is known as a "programming language".

I don't understand the obsession with "high level design tools" outside
programming languages.

Programming languages *are* the proper "high level design tool", and despite
seeming fuddy-duddy and old-fashioned, progress in programming languages has
always been, and will continue to be, the most potent means to deliver the
fruits of research to programmers.

If you take as an axiom that "humans will always have to make some decisions"
then this conclusion will follow.

The real improvements in programming come when interesting new concepts are
made into technology.
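
For what it's worth, here is the sort of thing I mean, sketched in
Python with invented names: the "high level design" is stated directly
in the language as an abstract interface, and the detailed design fills
it in, so there is no separate diagram to fall out of date.

# Design expressed in the language itself: the abstract class states the
# design decision; concrete subclasses are the detailed design.
from abc import ABC, abstractmethod

class MessageTransport(ABC):
    """Design-level statement: anything that delivers messages offers
    exactly these operations."""

    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

class InMemoryTransport(MessageTransport):
    """One concrete realization, useful for testing."""

    def __init__(self):
        self.outbox = []    # list of (destination, payload) pairs

    def send(self, destination: str, payload: bytes) -> None:
        self.outbox.append((destination, payload))

    def close(self) -> None:
        self.outbox.clear()

The abstract class is the design decision; the subclasses are the
detail.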

: --
: Nick

--
Matthew B. Kennel/m...@caffeine.engr.utk.edu/I do not speak for ORNL, DOE or UT
Oak Ridge National Laboratory/University of Tennessee, Knoxville, TN USA/
I would not, could not SAVE ON PHONE, |==================================
I would not, could not BUY YOUR LOAN, |The US Government does not like
I would not, could not MAKE MONEY FAST, |spam either. It is ILLEGAL!
I would not, could not SEND NO CA$H, |USC Title 47, section 227
I would not, could not SEE YOUR SITE, |p (b)(1)(C) www.law.cornell.edu/
I would not, could not EAT VEG-I-MITE, | /uscode/47/227.html
I do *not* *like* GREEN CARDS AND SPAM! |==================================
M A D - I - A M!


Don Harrison

unread,
Dec 12, 1996, 3:00:00 AM12/12/96
to

Roger Vossler wrote:

:I have no doubt that software could use


:a strong dose of engineering discipline.

It already exists. It's called "Design by Contract" a la Eiffel. :)

That alone is not sufficient, of course. There are many other factors that
can be brought to bear in the development of high quality software, not the
least of which are common sense and discipline.
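
For anyone who hasn't seen it: in Eiffel the contracts are part of the
language itself. As a rough, hand-rolled approximation only (sketched
here in Python with an invented Account class; Eiffel's real mechanism
is richer, with inherited contracts and selective monitoring), the idea
looks like this:

# Rough approximation of Design by Contract using plain assertions.
class Account:
    def __init__(self, balance: int = 0) -> None:
        self.balance = balance
        assert self._invariant(), "class invariant violated"

    def _invariant(self) -> bool:
        return self.balance >= 0

    def withdraw(self, amount: int) -> None:
        # precondition: the caller's obligation
        assert 0 < amount <= self.balance, "precondition violated"
        old = self.balance
        self.balance -= amount
        # postcondition + invariant: the supplier's obligation
        assert self.balance == old - amount, "postcondition violated"
        assert self._invariant(), "class invariant violated"

The point is less the assert statements themselves than the division of
responsibility they record: the precondition is the client's
obligation, the postcondition and invariant are the supplier's.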


Don.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Don Harrison do...@syd.csa.com.au

Ell

unread,
Dec 12, 1996, 3:00:00 AM12/12/96
to

Alan Meyer (ame...@ix.netcom.com) wrote:
: In article <58lbbo$8...@news.xmission.com>, tkn...@xmission.com wrote...

: <<snip>>
: >All too often, though, the "systems analysts" hand me a design document
: >which I'm supposed to implement which is exactly the programming
: >equivalent of that 1000-foot single-span stone bridge.

: I once visited a large municipal government computing shop with 130 people
: working there. I was told by the boss that as far as he's concerned, his
: "systems analysts" are to do all the thinking and his programmers, he
: called them "coders", are just supposed to translate those lofty thoughts
: into code. He then thought that the reason the average programmer only
: stayed 18 months (remember that's the average, I wonder what the good ones
: were doing!) was because that was the nature of the business and programmers
: were defective people anyway!

Quite a narrow-minded manager.

: I personally believe that the division into "analysts" and "programmers" is


: a dangerous one. If a person can't do both he is likely to do a lot of harm
: to a project. An "analyst" that doesn't understand programming will often
: specify impractical designs. A "programmer" that can't understand the
: needs of the users will often build unusable programs. The best systems
: always come from people who make it their business to understand the total
: problem from the point of view of the user, the point of view of the
: machine, and everything in between.

How about the formulation of an architecture to span the gap? This may or
may not require someone who's called an architect.

Elliott

Samuel S. Shuster

unread,
Dec 12, 1996, 3:00:00 AM12/12/96
to

Todd Hoff,

>OO is not supposed to be a specialized tool but a general
>methodology for designing and implementing software systems.

Maybe that's a problem in and of itself. Maybe OO is supposed to be a
specialized tool, but has been inappropriately hyped so much that too many
people think that it is supposed to be E-Z.

Let's look out to the side a moment. VisualBasic. Certainly it's one of those
"EveryMan" tools. But why does it fail so bad on large enterprise systems, and
in particular why does it fail so bad when the system requires large groups of
interacting subsystems and interaction between developers and testing and even
worse for maintenance?

I've got an opinion as to why. VisualBasic does not promote a disciplined
approach to development. I hold that in fact it promotes a cowboy attitude. In
order to get to the large system, with all that goes with it, using VisualBasic,
one not only has to diligently apply an external discipline, one has to fight
the tool in order to do so!

What OT (or any methodology) does is define a discipline. Is it a general
methodology? Yes. But a methodology nonetheless, and as such, it demands that
discipline be used in order to see any benefit from it. Lip service doesn't do
it. Knowledge alone doesn't do it. Doing it, with rigor, is the only way.

So, if OT has failed in any way, it is in not stopping the hype that allows
people to perceive that OT is just some kind of E-Z solution to all their
problems.

Is OT a better discipline for developing large systems? I believe so, but I
don't believe that this is the debate here.

A better question is, has Structured/Procedural Technology failed? If we only
judge by looking in the context of how the majority of procedural
tools/languages are used, then in my opinion, Yes. It has failed miserably... in
my further opinion, it has failed worse than OT.

However, if we look and judge by the cases where the rigor of the discipline of
Structured/Procedural Technology is actually applied, then I believe it has
succeeded fairly well.

Further, if we look and judge Object Technology in terms of rigor &
discipline, I believe it to be successful also.

Finally, we come to the comparison. When we look and judge by rigor &
discipline, and then _add_ in effectiveness and compare, I believe OT is
comparatively more successful.

But to reiterate, this all depends, deeply, on the fact that OT isn't a
belief, isn't simply the understanding of three concepts (Encapsulation,
Inheritance and Polymorphism), isn't even simply the correct application of these
and related concepts.

It is a discipline. It is a discipline like all other disciplines that in
order to be successful must be applied. Applied rigorously. In my opinion,
anything less is not Object Technology... It's the lip service of the
self-anointed experts whom I wouldn't trust to design my cat's upchucked hair
balls... even from a fresh example.

So, what's wrong with OO? What's wrong is people who think they should be able
to see the structure of molecules with a High School Microscope, and are then
overwhelmed when someone says "It takes a powerful electron microscope, bub, and
you'll have to learn how to use one, and apply some discipline in order to get
the results you need".

TANSTAAFL. The biggest problem facing the software community is the too
widespread belief that Object Technology is a free lunch.
And So It Goes
Sames

============================================================================
sshu...@parcplace.com
ParcPlace-Digitalk
Consultant
All opinions are my own.
============================================================================

Piercarlo Grandi

unread,
Dec 12, 1996, 3:00:00 AM12/12/96
to

>>> "nickle" == Nick Leaton <nic...@calfp.co.uk> writes:

nickle> Piercarlo Grandi wrote:

pcg> Perhaps the reverse: if the tools were really advanced, perhaps
pcg> including a program generator (and despite claims to the contrary
pcg> no such thing has been yet produced), then high level design
pcg> activity would be almost all the project.

nickle> Available now, called a programmer.

rmartin> In another case, I have worked with a client who had a bunch
rmartin> of "architects" doing nothing but drawing pretty Booch
rmartin> diagrams and then throwing them over the wall to a bunch of
rmartin> programmers. The programmers hated the architects and
rmartin> ignored what they produced.

Unfortunately, no matter how intensely so many managers wishfully think
so (note that I am not implying that you are a ``suit'' or that you
wishfully think so, just that such wishful thinking is common among
them), programmers are not "tools", and often not even "really advanced"
ones :-).

That programmers are not tools is indeed the reason behind Robert
Martin's observation that tossing bubble diagrams over the wall
does not work.
