
Is Programming R&D or Production?


Cliff C Heyer
Jun 16, 1990, 5:22:47 PM
I am often finding myself in a situation where I'm asked to give
an estimate for programming time on some new and completely
different program that I have no track record to estimate by. When
my estimate turns out to be wrong, I am told "the building
industry does it, so can you."

Then I got to thinking, "well, do they really do it?"

I did some research, and came up with the following comparison:

(Please pardon my use of generalizations below, and attempt to get
to the essence of what I am saying. I don't have time and the net
does not have BW for me to transmit a book on this subject.)


Summary of Steps Needed To Create a New Building:
STEP DESCRIPTION

1. Architecture Drawing of outside appearance, models.
Per Diem Expense. Major design elements.
Detail drawings of appearance:
Skin, face, hair; cosmetic portions.
Structural engineering:
Make sure that cosmetic aspects can be
affordably constructed with currently
available supplies.
Consult with Construction people to
compromise between materials and cost.
2. Engineering Mechanical and systems design on paper:
Per Diem Expense. Every nut, bolt, pipe, wire, screw is accounted
for.
Elevators, electrical systems, mechanical
systems, plumbing systems. Heart, pump,
valves, veins that make body tick.
An accounting is made for what each item is and
where it is used.
3. Construction Turn paper into reality on time and on budget
Fixed bid expense.

Summary of Steps Usually Used to Create a New Software Product:
STEP DESCRIPTION

1. Architecture Drawing of outside appearance, models.
Per Diem Expense. Major design elements.
Detail drawings of appearance:
Skin, face, hair; cosmetic portions.
Structural engineering:
Make sure that cosmetic aspects can be
affordably constructed with currently
available libraries.
Consult with Construction people to
compromise between materials and cost.
2. Engineering Never done.

3. Construction Construction person(s) asked to make an
Fixed bid expense. estimate on the architectural rendering.
Major systems are sketched out,
but numerous details are overlooked
which cause the estimate to be far off.
The "engineering" is done during
"construction" at a fixed price,
with the "construction" person paying
the difference if the estimate is wrong.

Summary of Steps That Should Be Used To Create a New Software
Product:
STEP DESCRIPTION

1. Architecture Drawing of outside appearance, models.
Per Diem Expense. Major design elements.
Detail drawings of appearance:
Skin, face, hair; cosmetic portions.
Structural engineering:
Make sure that cosmetic aspects can be
affordably constructed with currently
available libraries.
Consult with Construction people to
compromise between materials and cost.
2. Engineering Systems design. Designed by entering
Per Diem Expense. design (program) into a computer and
using it to make it work as per
architectural specifications.
IMPLEMENTATION OF EVERY FEATURE IS
SPECIFIED DOWN TO EACH LINE OF CODE -
This is the definition of "engineering".
If every line of code is not specified,
it is part of "Architecture" above.
Windowing system, data base engine
system, interrupt trapping system,
context sensitive help system, drop
down menu system.
3. Construction No construction phase with software,
process has been finished when the
engineering phase is complete. You
have a computer printout of the program
which is an "on paper" design.


What I found is that when I am asked to give an estimate, it
is the same as if you asked a construction company to give
an estimate with no engineering work. Obviously this is not done.

So why are software persons asked to do it?

And who says you can estimate this in the first place? If you do a
100% engineering of the software product, you have already
finished it as a per diem expense. There is no construction left
to do, as once you have it on paper that is all there is to be
done.

I argue that if you design a software program to the same level of
detail that a building is designed to, YOU WILL HAVE TO WRITE THE
SOFTWARE.

But in the software industry, this somehow is never done. People
are always asked to give estimates based on appearance: report
formats, screen/window layout, etc.

And therefore, every software "estimate" based on the architecture
phase is bound to be flawed, and persons making such estimates
should not be chastised for any errors. They are at best educated
guesses.

This process of making an estimate without any engineering makes
the project cost much more, because assumptions are made without
any programming being done. Several layers of bad assumptions
cause the wrong programming to be done when the "construction"
phase finally comes around, and then the bad assumptions are
discovered and hundreds of man hours are thrown out. If during the
engineering phase the code was actually written (at per diem),
these bad assumptions would be discovered before they become
layered and the wrong code would never be written.

One building is like another because a door hinge is a door hinge.
Different programs have different algorithms that are NOT the
same. They require R&D to develop, and I resent this being pushed
into and being classified as production for the convenience of
management or for any other reason.

When you have to figure out HOW to make something work, it's R&D -
plain and simple. Either you find someone who already knows HOW and charge
it as a production expense, or you do R&D with a person who has to
figure out HOW. You don't take a person who does not know HOW to
do something and classify them as production, yet this is done in
software all the time.

Then they hold the old carrot out there, "well just do the best
you can and there will be a big bonus for you..."

Determining HOW is not an estimatable expense. How much of the
time do programmers spend figuring out HOW to make something work?
How much of this time is allocated to production budgets?

Software is primarily an R&D business, and if people can't take
the heat of that kitchen they should get out, not change the rules
by proclaiming themselves in the production business to placate
their stockholders who demand earnings, and then demand production
performance from people doing what essentially is R&D.

With a building, the entire design is specified in the Engineering
phase.

With software, the entire design usually isn't known until the
project is finished because nobody seems to have enough experience
or foresight with software to accurately establish the design.

Then the engineering phase is often not done at all. The architect
gives his incomplete rendering to a "construction" person and asks
him for an "estimate". After the "construction" phase the estimate
is found to be far off, and the "construction" person is chastised
for not doing a good estimate.

In construction, the construction party NEVER gives an estimate
based on an architectural rendering! This is impossible! Yet in
software, this happens regularly. Department managers have to
stick to their pre-set budgets, so they avoid the per-diem
engineering expense and throw this responsibility on the shoulders
of the "construction" person who has no engineering design on
which to base an estimate. And if he DID have such a design, his
services would not be needed because the product would have
already been completed in the engineering phase.

People often speak of the problems of the software industry and
the man-years backlog. The first step to solve this crisis is to
admit what programming really is, instead of trying to fit a
square peg into a round hole.

John R. Dudeck
Jun 17, 1990, 1:28:44 AM

In article <30...@cup.portal.com> cliff...@cup.portal.com (Cliff C Heyer) writes:
>I am often finding myself in a situation where I'm asked to give
>an estimate for programming time on some new and completely
>different program that I have no track record to estimate by. When
>my estimate turns out to be wrong, I am told "the building
>industry does it, so can you."
>
>Then I got to thinking, "well, do they really do it?"
>

[Analogy of software development and the construction industry].

I think that this article points out a lot of what is at the heart of
the "software crisis", namely that software development has been looked
at by most as being similar to other forms of engineering, yet somehow
we just can't seem to get our act together and make it work.

I think there is one fallacy in the analogy, and that is that software
development is not finished when the "engineering" is done. For one
thing, the high-level and low-level designs of a product may be done
concretely, before the actual implementation is done. Also, once the
programs are written and working, the product is still far from finished,
since there will need to be further debugging, refinement, testing, and
packaging before it is ready to go to market. (I'm assuming a commercial
product here. An in-house system may not get this fine a treatment.)

It seems that the industry is starting to finally get a handle on the
nature of software development, realizing that as Cliff points out,
software engineering isn't like other kinds of engineering.

The nature of software development is such that if we can have many
rapid feedback cycles, the result is a product that more closely meets
the needs of the end user. Rapid prototyping is one step toward this goal.

Another step is the realization that we need to use techniques that
lend themselves to changes in the requirements and to the software that
is the product. The use of concise specification methods rather than
voluminous documents is a big help here. The use of well-encapsulated
designs is also critical.

There have been some models for software estimation (e.g. the COCOMO
model) that take into account the margin of error in the earlier stages
of the project. As the project moves into the low-level design
phases, the estimate becomes more accurate.
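
To make that concrete, here is a back-of-the-envelope sketch of the
basic COCOMO relationship (effort = a * KLOC^b person-months). The
coefficients are the published "organic mode" values; the project
sizes are invented for illustration.

    #include <cmath>
    #include <cstdio>

    // Basic COCOMO: effort in person-months grows slightly faster than
    // linearly with program size (organic mode: a = 2.4, b = 1.05).
    double cocomo_effort(double kloc, double a = 2.4, double b = 1.05) {
        return a * std::pow(kloc, b);
    }

    int main() {
        // An early size guess of 20 KLOC vs. an eventual 40 KLOC shows how
        // far the effort estimate can move as the design firms up.
        std::printf("20 KLOC: %5.1f person-months\n", cocomo_effort(20.0));
        std::printf("40 KLOC: %5.1f person-months\n", cocomo_effort(40.0));
        return 0;
    }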

It is clear that if the software industry continues to operate as it has
for the past 3 decades, the problems of software that is expensive,
unsatisfactory, and that takes a long time to develop will continue to
plague the world.

Object-oriented design and programming is an effort to make software
engineering be more like other kinds of industries, by creating software
components that are building blocks, and that can be reused without
having to re-design them for each project.


--
John Dudeck "I always ask them, How well do
jdu...@Polyslo.CalPoly.Edu you want it tested?"
ESL: 62013975 Tel: 805-545-9549 -- D. Stearns

joh...@p.cs.uiuc.edu
Jun 18, 1990, 11:48:00 AM

I agree that software development is more like research than production.
It will become more so in the future. There are lots of people who
are doing true "production" programming now, i.e. building software
that has been built hundreds of times before, software that is well
understood and has (at least at some shops) well defined specifications.
However, these guys should be reusing software instead of rebuilding
it. After "the industrial revolution" we will start reusing software
instead of rebuilding it all the time. At that time a larger fraction
of our effort will go into building new things.

Designing software is more like creating a new antibiotic than it is
like building a bridge.

Ralph Johnson -- University of Illinois at Urbana-Champaign

Scott McGregor
Jun 18, 1990, 2:29:00 PM


Cliff C Heyer makes many good points that I agree with. However, I believe
that his posting does not reflect some of the other economic and social
factors that lead to the sort of behavior he is discussing. In the past
I put together a posting on this subject for a more restricted audience, and
I'll see if I can dig up an old copy and re-post it. But I'll make some
summary remarks now.

I agree with Cliff that software developers do not tend to work designs
down to the same level of completeness that builders often do. However,
the complexity of the typical software design is usually much greater
than the typical construction project because construction materials are more
standardized than software modules are. While Cliff suggests that software
designs be taken down to the line of code level, the analog of this is not
defining every bolt and where it goes, but perhaps describing the depth of
every groove at each point (external interface) and the precise molecule
by molecule composition of the material. This level of detail IS NOT
done on construction projects. Requiring it might reduce some structural
failure errors, but it would overburden the project with too much paperwork,
and testing (QA) time.

I believe that this is largely why such detail is not supported by
management on software projects. Moreover, because the engineering
costs are small compared to the overall construction costs, people
are willing to pay for a "paper study" (i.e. the engineering design)
before the construction is started. It is okay to pay that small
amount in order to get a large amount of predictability in the more
expensive later portions of the project. However, in Cliff's
suggested version there would be no later expensive production phase,
since the completed engineering phase goes down to the level of a
line of code and is completely runnable. Unfortunately, that means
that management has to totally commit to a project even before they
know what it will cost. That's unlikely to happen in any group of
risk-averse managers. At best, this might shift the cost control and
estimation into the architecture phase, where it isn't the production
(now design phase) engineer's problem. But this just shuffles the
cost prediction problem to someone else; it doesn't solve it. For
that reason, it may not be considered a win by management responsible
for the entire project.

The truth of the matter as to how building contractors can predict
schedules better than software engineers is that they have a book
of expected times to do standard tasks. The tasks and materials
are pretty standardized and have been done by large numbers of
people so that reasonable maximum times have been observed. More
experienced contractors (i.e. 20+ years experience) often have their
own experienced judgements for deviation from the norms in the
standard contractor's book. They use this to further refine their
judgements. But new contractors don't start at ground zero, because
they get to start with the book times. Also, note that the book times
tend to be reasonable MAXIMUM estimates, since the determining factor
is usually the quality and experience of the help, and contractors want
to be able to make money even if they have to work with a bunch of
apprentices. The contractors (and the laborers) can often make more
money by being able to beat the book times.
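
A sketch of what estimating against such a book amounts to: mostly
lookups and a sum of observed maximum times. The task names and hours
below are invented for illustration.

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        // A hypothetical "book" of reasonable MAXIMUM hours per standard task.
        const std::map<std::string, double> book_hours = {
            {"frame interior wall (per 10 ft)", 6.0},
            {"hang and finish drywall (per sheet)", 1.5},
            {"rough-in plumbing (per fixture)", 4.0},
        };
        double total = 0.0;
        for (const auto& [task, hours] : book_hours) {
            std::printf("%-40s %5.1f h\n", task.c_str(), hours);
            total += hours;  // a real bid would multiply by quantities
        }
        std::printf("%-40s %5.1f h\n", "book total (reasonable maximum)", total);
        return 0;
    }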

It is different in the software world. Most people are asked to make
an estimate without any "book" that they can consult. The tasks,
materials and skills of the people on the project can be more variable.
So new estimators don't do well. Experienced (20+ years) estimators tend to
make better estimates, but there aren't very many of them. And they still
have to deal with the greater variability of tasks, materials and workforce.

A last problem is that in software it is common to obscure the difference
between a schedule (i.e. a contract) and an estimate. A contract or schedule
needs to contain enough time (slack) to cover unfavorable time variances. A
contract or schedule should represent a worst case, but should be so
solid that other people can build their own commitments around it. This
rarely happens in software in this country, but it is very desirable.
An estimate on the other hand is how long you think it might take. An
estimate doesn't handle the worst case scenarios since you don't think
those are likely to happen. When they do, you change your estimate.
Estimates slip; schedules shouldn't--but when you confuse the two,
schedules do slip all the time and that is exactly what we see happening
around us. I've endeavored to constantly make a distinction between
schedules and estimates and this has required a lot of education of
my managers and my employees. But it has enabled my project teams
to keep an enviable track record on meeting their commitments.
Unfortunately, I have found that many people not only turn in mean estimates
instead of schedules--they often turn in the "optimistic case" estimates
which are even more sure to lead to poor predictions. I've noticed that
people often turn in either the COCOMO mean times (typically) or the
earliest times. I've found that showing the huge variance from best
to worst case is effective in helping others to understand why estimates
are so different from schedules (or why schedules need to be so
conservative).
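
One standard way to make that variance visible is a three-point
(PERT-style) calculation; the day counts below are invented, but they
show why a mean estimate is not a schedule.

    #include <cstdio>

    int main() {
        // Invented figures for one task, in days.
        double optimistic = 10.0, likely = 15.0, pessimistic = 40.0;

        // PERT mean: weighted toward the most likely case.
        double mean = (optimistic + 4.0 * likely + pessimistic) / 6.0;

        std::printf("optimistic estimate: %5.1f days\n", optimistic);
        std::printf("mean estimate:       %5.1f days\n", mean);
        // A schedule that others can build commitments around has to sit
        // near the pessimistic end, not at the mean.
        std::printf("defensible schedule: %5.1f days\n", pessimistic);
        return 0;
    }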

This leads to a final point. For many software projects time to market is
extremely important. Early entrants often make significantly more
money than late entrants. So there is the natural pressure (mentioned
above) to look for optimistic estimates. In contracting, the "slack"
is built in to the book estimates, and if you finish early you can
always find some way to add quality, or do special finishing touches
that will be desired by the customer. In software the early dollars
encourage us to eliminate the slack. But this is a mistake. Slack
is necessary because things don't ever go all right. Making investment
decisions on shaved schedules that won't be profitable if they go their
normal amount of time will drive a company out of business. Better is to
make a modest product that can be done quickly even given the slack time.
If it is profitable with the slack time, it can be even more profitable if
due to your clever people, and good management you get it done early.
But at least you won't go bankrupt on a small project slip.

Scott McGregor
mcgr...@atherton.com

Rick Jones
Jun 20, 1990, 7:39:46 AM
(Excuse the cross-posting of this follow-up to comp.object, but I think it's
very relevant to OO thinking at present)

This is something that's been troubling me recently, especially as I've been
looking at object-oriented design & methods. There is a lot of emphasis on the
concept of "construction", the OO idea being that you "construct" out of
"components". I just suspect we are trying to take the wrong analogy in the
wrong direction.

The conceptual problem with software is that you don't actually construct
anything. Even when the program is a compiled executable binary, it is still
abstract, it is still _soft_. It only becomes something reasonably concrete
when a computer executes it to make it do something. In other words, the
computer does the construction automatically, and always has done (that's why
they were invented!). OO terminology actually fits in with this idea - the
software consists of classes which invoke constructors when they are executed.
The objects are the things which are constructed, and this only occurs at
run-time.

I think it is more helpful to consider the _objects_ as the components, not the
classes. Thus a repository of components is an OODB, not a software library.

So what is the software? Think of it as analogous to a detailed engineering
drawing. It contains all the information required to create a component which
has a given set of properties. As a program, the information is in a form
which enables the computer to construct the specified component. Whether the
form of this program is source text, linkable object, or executable doesn't
matter; these are just transformations of the same thing.

What is _not_ simple transformation is creating the detailed drawing from a
specification. In hardware engineering this is the job of a skilled draftsman,
in software it is the job of the programmer. In both cases the skill is the
interpolation of detail which is not explicit in the specification, but must be
described before the final component can be constructed. So our programmers
are really doing detailed design work, not manufacturing goods (we've always
known that really, haven't we :-).

As an aside, this implies something relevant to the CASE debate. My perception
of CASE is that its ultimate goal is to be able to generate code automatically
from high-level designs. If the programmer really is working as a draftsman
and generating detailed design, then this is not actually achievable. Writing
code _requires_ human skill, and cannot be totally automated. (I anticipate
flames from CASE fans - not that there seem to be many - for that one!)

I feel that the real reason software engineering has so many problems compared
to its hardware counterpart is the nature of programming languages. Try
comparing the characteristics of source code to a detailed engineering drawing.
The most important property of the drawing is that it still embodies the
original design in a visible way. The designer, even if he is not a draftsman,
can look at the drawing and tell almost at a glance whether it's what he
designed. Try doing that with source code! Vast amounts of money are now
being spent by the computer industry trying to reverse-engineer old source in
order to extract the underlying design. An old set of engineering drawings
is a lot clearer about what it represents.

Having committed a possible heresy in saying that OOD is not building software
from components, I should say what I think it really is, especially as I think
it is still the best way forward. Since I've said that programming is really
detailed design, it follows that OOP is design re-use, not component re-use.
Try this comparison:

    Engineering drawing              Programming

    hand-drawing each design         straight-line code
    stencils for standard shapes     function libraries
    CAD with component database      OO with re-usable classes

The OO programmer builds high-level class designs by using existing lower-level
classes, sometimes adapting them (via inheritance) if they are not exactly what
is required. The engineer using a CAD system builds his drawings in very much
the same way. Interestingly, CAD systems are probably one of the first areas
to which OO methods have been widely applied.
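
A minimal sketch of that adaptation step, with hypothetical class
names: a stock class is pulled from the library and adjusted via
inheritance, much as a CAD user tweaks a stock part.

    #include <cstdio>

    // A lower-level "library" class, usable as-is most of the time.
    class Gauge {
    public:
        virtual ~Gauge() = default;
        virtual double reading() const { return raw_; }
        void set_raw(double v) { raw_ = v; }
    protected:
        double raw_ = 0.0;
    };

    // Not exactly what we need, so we adapt it by inheritance rather
    // than redrawing it from scratch.
    class CalibratedGauge : public Gauge {
    public:
        explicit CalibratedGauge(double offset) : offset_(offset) {}
        double reading() const override { return raw_ + offset_; }
    private:
        double offset_;
    };

    int main() {
        CalibratedGauge g(0.5);
        g.set_raw(10.0);
        std::printf("%.1f\n", g.reading());  // prints 10.5
        return 0;
    }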

I could continue ad nauseam on this theme, but I'll save some net BW by saying
that my conclusion from this analogy is that improvements in software
engineering will only come about if programming languages are perceived and
developed to represent both design and implementation, rather than just being
implementation formalisms. Using OOP, the code for a class must be able to
describe the intrinsic properties of the ADT it represents in such a way that
the implementation details are _guaranteed_ to conform. Run-time checking of
assertions, in the style of Eiffel, is a pragmatic but not altogether
watertight method. Algorithmic analysis by the compiler is a more elegant
concept but far more difficult to do (any comments from experts in this
field?).

Have I said something contentious? I hope so. I'd like to see comments from
anyone with real hardware or building engineering experience; I might just
have been talking a load of bull.

--
Rick Jones You gotta stand for something
Tetra Ltd. Maidenhead, Berks Or you'll fall for anything
ri...@tetrauk.uucp (...!ukc!tetrauk.uucp!rick) - John Cougar Mellencamp

George Williams
Jun 21, 1990, 8:55:58 AM

I also agree that programming is more like R&D than production, but
how are we gonna convince our customers/managers of that after we've
been caving in to their demands for fixed price/schedule predictions
for all of this time?
--
George Williams
E-Mail: geo...@huntsai.boeing.com Phone: 205+461-2597 BTN: 861-2597
USMail: Boeing Computer Services, POBox 240002, JA-74, Huntsville AL 35824-6402
<< Disclaimer: Boeing is not responsible for my opinions, nor I for theirs. >>

Brad Cox
Jun 21, 1990, 1:14:53 PM
In article <1021...@p.cs.uiuc.edu> joh...@p.cs.uiuc.edu writes:
>Designing software is more like creating a new antibiotic than it is
>like building a bridge.

Several other analogies I've seen used:
    Proving a theorem (C.A.R. Hoare)
    Writing a novel (Apple lead programmer)
    Manufacturing ('Software Factories')

The one that seems most apt, and achievable, to me is
Plumbing

To wit, plumbers seldom do production. Every situation is new and uniquely
different since plumbing is a build-to-order business like programming,
not a production business like automobile manufacturing. There are
appreciable requirements, specification, design, prototyping, and
implementation phases. And most significantly of all, there is a robust
commercial marketplace that both limits and enhances the plumber's
"creative process".

Furthermore, notice the balance of power in the producer/consumer relationship
between plumber and homeowner. To wit, if a plumber proposed to work as
programmers do, by inventing everything from first principles, the homeowner
is empowered (since plumbing is a concrete business that everyone has the
reasoning skills to understand and exercise some influence over) to take a
definite stand. Few managers and software consumers are empowered to do this
because software is so abstract that important decisions are invisible to
them.
--

Brad Cox; c...@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

Rich Neitzel
Jun 21, 1990, 12:48:01 PM
>I feel that the real reason software engineering has so many problems compared
>to its hardware counterpart is the nature of programming languages. Try
>comparing the characteristics of source code to a detailed engineering
>drawing.
>The most important property of the drawing is that it still embodies the
>original design in a visible way. The designer, even if he is not a
>draftsman,
>can look at the drawing and tell almost at a glance whether it's what he
>designed. Try doing that with source code! Vast amounts of money are now
>being spent by the computer industry trying to reverse-engineer old source in
>order to extract the underlying design. An old set of engineering drawings
>is a lot clearer about what it represents.

Ah, the old "engineers do things in a nice, neat formal manner" canard.
BS. All fields of engineering suffer from similar problems. Note for
instance the comment, "The designer...can...". Great, but what about the
maintainer - are the drawings obvious to him? I doubt it. In about 95% of the
drawings I've seen there are modifications that do not show on the
drawings. So one has to reverse-engineer what they are. Further, even
the original designer can only understand the drawings "at a glance" if
the project is simple. Forty pages of detailed drawings with numerous
notes is not something you can scan quickly for correctness. BTW, having
worked closely with both mechanical and electrical engineers and being a
testing engineer myself, I can assure you that much of the work that
goes on is seat-of-the-pants effort. Sure we generate drawings before
building, but unless it's a routine "done a million times before"
project, those initial drawings are nothing more than a jump-off point.
Many times the changes and iterations are not reflected in the drawings
until the product is complete, and guess what - then it's more a matter
of reverse-engineering the drawings than updating them.

Oh, but WE don't do things like that, OUR engineers are true engineers;
you must have worked with shoddy firms. Oh really? Take a good look
around - got any boards where the documentation says "...to set jumper
J12, cut the trace between...". Any boards with green wires sprouting?
Ever install field upgrades that correct defects in the original design?
Ever have your car recalled due to a design flaw? What's the latest
count of NASA's lost launch vehicles?

Richard Neitzel th...@thor.atd.ucar.edu Torren med sitt skjegg
National Center For Atmospheric Research lokkar borni under sole-vegg
Box 3000 Boulder, CO 80307-3000 Gjo'i med sitt shinn
303-497-2057 jagar borni inn.

Christopher Lott
Jun 22, 1990, 8:03:06 AM
In article <52...@stpstn.UUCP> c...@stpstn.UUCP (Brad Cox) writes:
>The one [analogy for programming] that seems most apt, and achievable, to me is
> Plumbing

I think this is a useful view, because it stresses the use of "standard"
components (whatever standard means) and not starting first with iron ore,
but the problem is that a plumber's task is always fundamentally the same.
Carry in fresh, carry out used. The drainage angles may be goofy, the pressure
may have to be high, the building code may specify copper over iron, etc.,
but the task remains constant.

I much prefer the "writing a novel" analogy, because a novel may have
a variety of different purposes. To inform, to entertain, to shock, etc,
all are possible. Further, a novel is constructed much in the same
way software is - by magic :-) :-)

(ok, I know, that's heresy, couldn't resist.)

chris...

--------------
The three rules of plumbing (heard from an ex-plumber friend)
1. Sh*t runs downhill.
2. Never chew your fingernails.
3. Payday's on Friday.
--
Christopher Lott Dept of Comp Sci, Univ of Maryland, College Park, MD 20742
c...@cs.umd.edu 4122 AV Williams Bldg 301-454-8711 <standard disclaimers>

Peter Joseph Welter
Jun 22, 1990, 12:20:47 PM
In article <5...@tetrauk.UUCP> ri...@tetrauk.UUCP (Rick Jones) writes:
>This is something that's been troubling me recently, especially as I've been
>looking at object-oriented design & methods. There is a lot of emphasis on the
>concept of "construction", the OO idea being that you "construct" out of
>"components". I just suspect we are trying to take the wrong analogy in the
>wrong direction.
>
>The conceptual problem with software is that you don't actually construct
>anything. Even when the program is a compiled executable binary, it is still
>abstract, it is still _soft_. It only becomes something reasonably concrete
>when a computer executes it to make it do something. In other words, the
>computer does the construction automatically, and always has done (that's why
>they were invented!). OO terminology actually fits in with this idea - the
>software consists of classes which invoke constructors when they are executed.
>The objects are the things which are constructed, and this only occurs at
>run-time.
>
>I think it is more helpful to consider the _objects_ as the components, not the
>classes. Thus a repository of components is an OODB, not a software library.
>
I agree with your characterization of the objects as components. I think that
there might be two domains of design: the class domain and the object domain.
The class domain defines a scope of _possible_ actions and relationships between
objects, and is hence not an entirely accurate description of the system's
workings. As you said, the classes also provide a toolbox from which to
build components (objects). Design in the object domain could provide a means
to directly specify relationships between objects, removing one level of
abstraction from the process.

As an example, take a user interface design program that allows a user to
graphically build a set of windows, and the window items that are contained
in them. Each time the user adds a window item and places it in a specified
window, a relationship between a window item and the window containing it
is established. I would argue that this is just as valid a method of
software design as the design of a class hierarchy, although for some kinds
of problems, it might be impractical.
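
A minimal sketch of that idea, with hypothetical names: placing an
item in a window records a relationship between two live objects,
which is itself a design act in the object domain rather than the
class domain.

    #include <memory>
    #include <string>
    #include <vector>

    struct WindowItem {
        std::string label;
        int x = 0, y = 0;
    };

    struct Window {
        std::string title;
        std::vector<std::unique_ptr<WindowItem>> items;

        // Adding the item IS the design step: the containment relationship
        // now exists between these particular objects, not between classes.
        WindowItem& add_item(std::string label, int x, int y) {
            items.push_back(std::make_unique<WindowItem>(
                WindowItem{std::move(label), x, y}));
            return *items.back();
        }
    };

    int main() {
        Window login{"Login"};
        login.add_item("User name", 10, 10);
        login.add_item("Password", 10, 40);
        return 0;
    }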

Pete Welter
Apple Student Rep.
University of Wisconsin-Milwaukee
pe...@csd4.csd.uwm.edu

Warren Harrison
Jun 22, 1990, 12:26:20 PM
In article <52...@stpstn.UUCP> c...@stpstn.UUCP (Brad Cox) writes:
>In article <1021...@p.cs.uiuc.edu> joh...@p.cs.uiuc.edu writes:
>>Designing software is more like creating a new antibiotic than it is
>>like building a bridge.
>
>Several other analogies I've seen used
> Proving a theorem (C.A.R. Hoare)
> Writing a novel (Apple lead programmer)
> Manufacturing ('Software Factories').
>
>The one that seems most apt, and achievable, to me is
> Plumbing
>
Actually I've always thought the best analogy for software development
is movie production. It shares many similarities:
(1) It involves coordination of massive amounts of people and equipment.
(2) Every movie (well, at least most) is different enough that you can't
    mathematically predict how much it will cost or how long it will take.
(3) Many films are made on a fixed (and inadequate) budget - eg
    "Rock & Roll High School" with the Ramones, to name one (sorry, I
    couldn't help it :-).
(4) The cost of making the movie is high; the cost of production
    (ie, making the copies) is low.
(5) Many movies are topical, so the time to market is important.
(6) There are many analogues between the roles - script writer is like a
    software designer / producer is like a product manager / director is
    like the project manager / actors are like programmers (I like that
    part :-) / continuity is like QA.

There is little maintenance with film, but sequels which borrow a great deal
of the original script as well as some film cuts might offer similar
challenges.

Any comments on this analogy?

Warren


==========================================================================
Warren Harrison war...@cs.pdx.edu
Department of Computer Science 503/725-3108
Portland State University

Cliff C Heyer
Jun 23, 1990, 7:14:02 PM
=======================================================================
[The following written by Cliff Heyer, the person who made the original
posting in this collection.]

Re: John Dudeck
/the "software crisis", namely that software development has been looked
/at by most as being similar to other forms of engineering, yet somehow
/we just can't seem to get our act together and make it work.
But why should it work? In other forms of engineering, rigid time
estimates are not given. It's just that programming somehow got
slipped into the production department, and when things
can't be done on schedule people get upset.

Programming will never be done on schedule, because you are not
assembling components when you program. You are DEVELOPING new
components and devising a way to assemble them. These components have
no previous history to compare to for an accurate estimate.

The auto industry makes cars every year, so they know about how
long it takes to design a car. Computer programs are not like cars;
each one has its own unique purpose and needs, and these cannot be
classified en masse.

/I think there is one fallacy in the analogy, and that is that software
/development is not finished when the "engineering" is done.
Sorry, I can't see where I implied this in my posting.
Engineering continues until the product is complete. There is no
construction phase, because the program IS the design, and IT
is what is being sold.

/ For one
/thing, the high-level and low-level designs of a product may be done
/concretely, before the actual implementation is done. Also, once the
/programs are written and working, the product is still far from finished,
/since there will need to be further debugging, refinement, testing, and
This is all part of the engineering phase.

A software house is really an engineering house. But somehow
it became thought of as a construction company. The drive is on
profits, profits, profits. So engineering is forgotten because it
is too much of an open-ended expense, and everything is forced
to be on a production schedule.

/It seems that the industry is starting to finally get a handle on the
/nature of software development, realizing that as Cliff points out,
/software engineering isn't like other kinds of engineering.
This is NOT the point I was making. You COMPLETELY missed my point.
Software engineering IS like other forms of engineering, but nobody
treats it that way. This is the problem.

The whole problem is that SOFTWARE HOUSES REFUSE TO TREAT
SOFTWARE LIKE ENGINEERING! They try to treat it like production
and manufacturing, and then they cry when deadlines are missed.

The point is that "deadlines" are inappropriate in engineering, because
the quality of the product suffers (defective cars, TVs, etc.) You don't run
an engineering firm with that mentality. You don't run a software house
with that mentality, but many do. An engineering firm with a profit only
motive will go out of business after all the structures it designs collapse.
Actually, this is what is happening with a lot of software houses these days.
/
/The nature of software development is such that if we can have many
/rapid feedback cycles...Rapid prototyping ...changes in the requirements
/...concise specification methods... (COCOMO model)...becomes more accurate.
Good comments,
but they have nothing to do with my point. My point is that this process should
be classified as engineering. The estimates won't improve, because
engineering estimates are much more relaxed than production. But we
will no longer be able to criticize bad estimates, which will increase job
satisfaction and the general quality of software by making programmers
"want" to do better work.

/... the problems of software that is expensive,
/unsatisfactory, and that takes a long time to develop will continue to
/plague the world.
/Object-oriented design and programming is an effort to make software
/engineering be more like other kinds of industries, by creating software
/components that are building blocks, and that can be reused without
/having to re-design them for each project.
The process of fitting engineering into production continues. Now
they have devised "software components" to further facilitate this
process. To people who think they will get software "components" that meet
specifications, use a minimum of resources and do not require engineering,
GOOD LUCK.

Is anyone trying to treat software as engineering to see what benefits accrue?

I predict "software building blocks" to be a dismal failure, unless one is
happy with resource hog programs. There is too much tuning that a craftsman
can do to double and triple performance in certain situations. Adding "black boxes"
to eliminate this process will guarantee mediocre programs. That is, until we have
10000 MIPS on every desktop, then maybe "how well" software works will no
longer be an issue.
---------
Re: Scott McGregor
/ However,
/the complexity of the typical software design is usually much greater
/than the typical construction project because construction materials are more
/standardized than software modules are.
I talked to people who built a 50-floor high-rise. The design effort I saw
makes software efforts I've seen look pretty poor.
/.. line of code level... [same as] describing the depth of
/every groove at each point (external interface) and the precise molecule
/by molecule composition of the material.
Every item is accounted for in the building estimate, while every item
IS NOT accounted for in a software estimate. No wonder software
estimates are wrong so much of the time. But then as Scott points out,
you can't complete the whole program just to get the software
estimate. The software business is a tough one to be in.

/Moreover, because the engineering costs are small
/compared to the overall construction costs, people are willing to pay for
/a "paper study"
This is true. In a big project, the engineering cost is small compared to
MATERIALS. But the accuracy is good because all the components
are known quantities to estimate.

But software has NO materials! The design IS the product, so there is
no construction cost to overshadow the engineering costs.

Software is all engineering. You are never assembling components,
but are devising new components and developing ways to fit them together.
(glue logic, data structures, etc.) When you are done devising the components
you are finished, BECAUSE THAT IS WHAT YOU SELL! The only "production" cost
is version control and making disk copies.

/However, in Cliff's suggested version.....management has to totally
/commit to a project even before they know what it will cost.
/...[this] shuffles the cost prediction problem to someone else it doesn't solve
/it.
My point is that software is not the business to be in if you want the
accurate cost prediction that occurs in the construction industry.

I don't have a solution to this problem, and I don't think there is a solution,
just as there is no solution to accurately estimating engineering cost
in any other field. In this respect, all my writing on this subject is
"negative" - that is, I am critical of what is being done and I have no
solution except to recommend an "engineering" classification. But seeing
how unsuccessful previous attempts have been at solving the prediction
problem, I think restructuring software as an engineering R&D department
can only have beneficial effects. I realize this is not compatible with the
interests of time-to-market folks, but no guts, no glory.

I would rather see software classified as engineering, with no construction
component, and have all software ventures classified as "engineering
studies". Such studies are not fixed-priced, and the accuracy is equal
to the amount of cash you pump in. If you want an accurate estimate, you
PAY FOR IT. Don't do it by breaking the backs of programmers who make the
educated guesses, which is all that can be made unless the program is
coded and finished. Programmers are really getting a raw deal in the
current scenario. That's why so many of them change jobs and are
generally disillusioned with their careers.

We would have a lot better software delivered on a more prompt schedule,
and have happier employees. People who feel they are part of something will
do much better than those passed over for promotion because of a bad estimate.
In the end, I believe such an approach would pay off.

/The truth of the matter as to how building contractors can predict
/schedules better than software engineers is that they have a book
/of expected times to do standard tasks. The tasks and materials
/are pretty standardized and have been done by large numbers of
/people so that reasonable maximum times have been observed.
THIS IS THE CORE OF THE ISSUE!
I submit that this will NEVER occur in software because the components
are necessarily different for each program's unique needs. All
buildings are made of the same pieces. Why should we think that
programs in THOUSANDS of industries will ever share many of
the same components? I think the idea is poppycock.

/It is different in the software world. ..people ... make
/an estimate without any "book" [to] consult.
/ ... tasks...can be more variable.
Exactly what I've said.

/[distinguish] a schedule (i.e. a contract) and an estimate.
[discussion of inclusion of variances in estimates, scheduling, etc.]
Omission of these items surely contributes to erroneous estimates.

/For many software projects time to market is
/extremely important. Early entrants often make significantly more
/money than late entrants.
Try a small core of programmers who own enough stock in the company
that they will make a nice nest egg if the product makes it big. Treat
them as an R&D group rather than a production group with
conventional management. I think this will produce more software
of better quality in less time than conventional approaches. Financial
backers of the company won't like this approach though because
if one person leaves, the value of their investment will be diluted.

/ So there is the natural pressure (mentioned
/above) to look for optimistic estimates.
YES! I find I always have to match the estimate my manager
suggests, because if I don't I'm in trouble. Actually this
makes estimating easy, except that I am still responsible
when the estimate is wrong.

-----
Re: Ralph Johnson

/However, these guys should be reusing software instead of rebuilding
/it.
Try it sometime, it's not as easy as you think.
/After "the industrial revolution" we will start reusing software
/instead of rebuilding it all the time.
Hah. This will never happen, and I'll explain why. Take a date validation function
for example. Date validation is the same everywhere, so you acquire a C library
that does it. When you get it, you find it requires an ASCII format that your data
does not have. You don't have the source to the "component", so you have
to restructure your date from (char)10/9/90 to (long int)19901009 for
the function, wasting CPU cycles. You get another library with a function that
accepts an argument that declares the syntax of the date. Now the function
has to internally check this flag for each date, wasting cycles and wasting
space for syntax conversions never used.
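
For the flavor of it, here is a sketch of the first shim described
above (the vendor entry point is hypothetical): every call pays for
reformatting "M/D/YY" text into the YYYYMMDD long the library insists on.

    #include <cstdio>

    // extern long lib_validate_date(long yyyymmdd);  // purchased "component"

    // Convert our native "10/9/90"-style text into the library's long form.
    long to_yyyymmdd(const char* mdy) {
        int m = 0, d = 0, y = 0;
        if (std::sscanf(mdy, "%d/%d/%d", &m, &d, &y) != 3) return -1L;
        if (y < 100) y += 1900;  // two-digit years assumed to be 19xx
        return 10000L * y + 100L * m + d;
    }

    int main() {
        std::printf("%ld\n", to_yyyymmdd("10/9/90"));  // prints 19901009
        return 0;
    }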

(OOP folks, how does OOP or CASE help this problem?)

Some people can't waste the CPU cycles, so they will code it themselves.
Others realize it takes much more money to fit others' components into
their program than to develop their own components. Still others learn that
using others' components means that they have black boxes that cannot be
modified, limiting their future choices.

When the day comes that a company goes into business manufacturing
thousands of these items in every possible variation, then we may
have partial component-based software development. But now there
is no money in this because it takes more money to administer component
use than to hire someone to write the component. Also everyone is busy
making software for the retail market where the most customers are
rather than OEMing components.

I predict making software components will do nothing to solve the
problem of estimating software development time. There will still
be custom components to be made, and these will suffer the same
estimation problems.


Re: Discussion on OOP:
I have yet to see how OOP solves any of the problems regarding
estimating time for writing a program. Someone at some point has
to develop a sequence of steps that make each "object" work. The
subjects I'm discussing aren't even relevant to CAD because using
CAD isn't writing a program - CAD is a higher-level abstraction using
an existing set of modules already written.

Re: Plumbing & movie analogies:
These are new ones to me and interesting! But in my posting I'm trying
to stick to basic principles to figure out the "truth" rather than make
comparisons, which can be misleading.

Cliff Heyer
cliff...@cup.portal.com
=======================================================================

Brad Cox
Jun 23, 1990, 12:17:50 PM
In article <5...@tetrauk.UUCP> ri...@tetrauk.UUCP (Rick Jones) writes:
>This is something that's been troubling me recently, especially as I've been
>looking at object-oriented design & methods. There is a lot of emphasis on the
>concept of "construction", the OO idea being that you "construct" out of
>"components". I just suspect we are trying to take the wrong analogy in the
>wrong direction.

Imagine the army (software engineering) and the navy (computer science) arguing
about the nature of a swamp. The army says it's a lot like the land (concrete),
and the navy says it's like the sea (abstract). Both are partially right, and
both are partially wrong, because the swamp is too thick to drink but too
thin to plow.

Software is a swamp that is too thin for engineering analogies and too thick
for mathematical ones. Instantiation, inheritance, and static binding
are part of software's mathematical heritage, whereas reuse, encapsulation,
and dynamic binding are from its engineering heritage. Whichever analogy you
choose as 'right' is sure to be wrong somewhere, just as land and sea are
only first approximations to swamp.

The fact that OO brought all of these disparate elements together, even though
they mix about as well as oil and water, is part of the reason it suddenly
became so popular. Both army and navy could find something there to like.
But the fact that the different factions see totally different things to
like there is also responsible for everybody else's confusion about what
OO really *means*.

John R. Dudeck
Jun 24, 1990, 2:51:11 AM

cliff...@cup.portal.com (Cliff C Heyer) writes:

>Re: Ralph Johnson
>
>/However, these guys should be reusing software instead of rebuilding
>/it.
>Try it sometime, it's not as easy as you think.
>/After "the industrial revolution" we will start reusing software
>/instead of rebuilding it all the time.
>Hah. This will never happen, and I'll explain why. Take a date validation function
>for example. Date validation is the same everywhere, so you acquire a C library
>that does it. When you get it, you find it requires an ASCII format that your data
>does not have. You don't have the source to the "component", so you have
>to restructure your date from (char)10/9/90 to (long int)19901009 for
>the function, wasting CPU cycles. You get another library with a function that
>accepts an argument that declares the syntax of the date. Now the function
>has to internally check this flag for each date, wasting cycles and wasting
>space for syntax conversions never used.

Here is where Object-oriented design comes in. Instead of a date validation
function, what you would have is an object called date, and one of the things
that it can do is maintain its validity. You don't have to have a date
validation function, because a date object can never be anything but a date.

As an application programmer, you don't know or care what form the date
is represented in. All you know is that there is an instance of a date.
You can ask it to display itself, pass itself to you for you to use in
your program, etc.
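
A minimal sketch of such a class, with hypothetical names: validity is
checked once, at construction, so an invalid date object simply cannot
exist.

    #include <stdexcept>

    class Date {
        int year_, month_, day_;
    public:
        Date(int year, int month, int day)
            : year_(year), month_(month), day_(day) {
            // Crude range check for illustration; a real class would also
            // know month lengths and leap years.
            if (month < 1 || month > 12 || day < 1 || day > 31)
                throw std::invalid_argument("not a valid date");
        }
        // Callers never depend on the internal representation.
        int year() const { return year_; }
    };

    int main() {
        Date ok(1990, 10, 9);      // constructs fine
        // Date bad(1990, 13, 1);  // would throw: never becomes an object
        return ok.year() == 1990 ? 0 : 1;
    }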

>Some people can't waste the CPU cycles, so they will code it themselves.
>Others realize it takes much more money to fit others' components into
>their program than to develop their own components. Still others learn that
>using others' components means that they have black boxes that cannot be
>modified, limiting their future choices.
>
>When the day comes that a company goes into business manufacturing
>thousands of these items in every possible variation, then we may
>have partial component-based software development. But now there
>is no money in this because it takes more money to administer component
>use than to hire someone to write the component. Also everyone is busy
>making software for the retail market where the most customers are
>rather than OEMing components.

I believe that there will be a move to commercial component libraries,
but they will be object-oriented and not function-oriented.

>I predict making software components will do nothing to solve the
>problem of estimating software development time. There will still
>be custom components to be made, and these will suffer the same
>estimation problems.

Well, at least the problem will be simplified. Think of what happens
in electronic circuit design. The design engineer has databooks full
of chips and other components that he can draw upon. At some point, it
may be cost-effective to build an application-specific chip. But how
does he do this? He doesn't have to design his chip at the transistor
level, because he has a CAE system that allows him to assemble pre-designed
logic cells, which perform higher-level functions and have specified
interfaces.

In the future, I believe a large percentage of software development will
be done by assembly from pre-developed class libraries. And the parts
that are not already in a library can be built up from sub-components
and added to the library for next time.

>I have yet to see how OOP solves any of the problems regarding
>estimating time for writing a program. Someone at some point has
>to develop a sequence of steps that make each "object" work. The
>subjects I'm discussing aren't even relevant to CAD because using
>CAD isn't writing a program - CAD is a higher-level abstraction using
>an existing set of modules already written.

OOP at least improves the situation to the extent that the object classes
that are already written don't require any development time in order to
reuse them. The issue is not whether it helps you estimate the time needed
to develop the parts that have to be done from scratch...just that you
don't have as much to do from scratch compared to a non-object-oriented
project. The effort spent on developing well-designed class libraries
is appreciable, but it is all the more appreciated when you don't have
to re-expend it on every project.

I have been one who has talked about the craftsman mentality - how
if something is worth doing, it is worth doing well. But that does not
mean that every time I go to write a program I will write every line of
code from scratch. If there is a way that I can use well-made components,
so much the better. Then I can spend my energy working on the real design
issues of the project.

Brad Cox
Jun 23, 1990, 10:59:26 PM
In article <25...@mimsy.umd.edu> c...@tove.cs.umd.edu (Christopher Lott) writes:
>In article <52...@stpstn.UUCP> c...@stpstn.UUCP (Brad Cox) writes:
>>The one [analogy for programming] that seems most apt, and achievable, to me is
>> Plumbing
>
>I think this is a useful view, because it stresses the use of "standard"
>components (whatever standard means) and not starting first with iron ore,
>but the problem is that a plumber's task is always fundamentally the same.
>Carry in fresh, carry out used. The drainage angles may be goofy, the pressure
>may have to be high, the building code may specify copper over iron, etc.,
>but the task remains constant.
>
>I much prefer the "writing a novel" analogy, because a novel may have
>a variety of different purposes. To inform, to entertain, to shock, etc,
>all are possible. Further, a novel is constructed much in the same
>way software is - by magic :-) :-)

I agree that programming as the term is usually defined today, as a
gate (expression) or block (subroutine) level activity, is certainly
more like novel-writing than plumbing.

I was referring to when chip (instance/message), card (task/stream)
and rack (process/file) level reusability comes fully online.
Programming at these higher levels of abstraction, particularly at
the card and rack level, should be *precisely* like plumbing.
In hardware engineering today, notice that those who work with card-level
modularity would agree that their task is "fundamentally always the same".
And that's precisely what they want, and why they choose to stay at
that level of modularity. People decide to work at higher levels
precisely because they *want* it to be always the same; no muss,
no fuss, plug it in and it *works*.

Note that the rack-level programmers of today (Unix shell) even speak
of their modularity/binding technology as pipes and filters.

Cliff C Heyer
Jun 24, 1990, 1:10:57 PM
John Dudeck writes...

>>/After "the industrial revolution" we will start reusing software
>>/instead of rebuilding it all the time.
>>Hah. This will never happen, and I'll explain why. Take a date validation function
>>for example. Date validation is the same everywhere, so you acquire a C library
>>that does it. When you get it, you find it requires an ASCII format that your data
>>does not have. You don't have the source to the "component", so you have
>>to restructure your date from (char)10/9/90 to (long int)19901009 for
>>the function, wasting CPU cycles. You get another library with a function that
>>accepts an argument that declares the syntax of the date. Now the function
>>has to internally check this flag for each date, wasting cycles and wasting
>>space for syntax conversions never used.
>
>Here is where Object-oriented design comes in. Instead of a date validation
>function, what you would have is an object called date, and one of the things
>that it can do is maintain its validity. You don't have to have a date
>validation function, because a date object can never be anything but a date.
I know, but this does not solve the above problem. YOU STILL HAVE TO PUT
DATA IN THE OBJECT! YOU *MUST* VALIDATE IT IF IT COMES FROM AN
OUTSIDE SOURCE!

This is a perfect example of why scientists get bashed for not being
practical.

You are interfacing with any number of applications and hardware platforms.
They DO NOT have a standard way to represent a date.
Here is a list: 19900101, 1/1/90, 01/01/90, 01/01/1990, 1/1/1990,
Jan 1, 1990, January 1 1990, January 1, 1990, Jan-01-90, JAN-1-90,
90.1.1, 900101, 0110100101110101000101001010101 (binary UDT,
universal date & time), BCD (binary coded decimal - do you know what
that is?), plus dozens more.

I submit that OOP has NO BENEFIT in terms of development time for the writing
of a program to reduce the above to a common date. The resulting
code may be easier to maintain, and more reusable. But the real problem
today is time estimation; that is what is bankrupting companies and
departments.

These programs are a pain in the a** to write, and they always will be, no
matter what new programming fad comes along.
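
[To make the pain concrete, here is a C sketch of such a format-reduction
routine. It handles just two of the syntaxes listed above; the names are
made up, and a real version would need a validated case for every format.]

    #include <stdio.h>

    /* Reduce two of the many date syntaxes to a common (long) YYYYMMDD. */
    long date_to_common(const char *s)
    {
        int m, d, y;

        if (sscanf(s, "%d/%d/%d", &m, &d, &y) == 3)    /* e.g. 10/9/90 */
            return (y < 100 ? 1900L + y : (long)y) * 10000 + m * 100 + d;
        if (sscanf(s, "%4d%2d%2d", &y, &m, &d) == 3)   /* e.g. 19901009 */
            return (long)y * 10000 + m * 100 + d;
        return -1L;                                    /* unknown syntax */
    }

    int main(void)
    {
        printf("%ld\n", date_to_common("10/9/90"));    /* prints 19901009 */
        printf("%ld\n", date_to_common("19901009"));   /* prints 19901009 */
        return 0;
    }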

>As an application programmer, you don't know or care what form the date
>is represented in.

Most programs interface with the external world, not just an internal one.
What you speak of is fine for engineering an encapsulated product. Even so,
YOU STILL HAVE TO SELECT A DATE FORMAT FOR SCREEN DISPLAY, so
your "object" must be able to display it. Maybe the boss wants 10/9/90
but the object was written in Fiji and has the 90.10.9 format.
Somebody has to make that change, it's not done by magic.

>You can ask it to display itself, pass itself to you for you to use in
>your program, etc.

This is nice, I'm not trying to argue with this.

>>I predict making software components [OOP] will do nothing to solve the
>>problem of estimating software development time.

>Well, at least the problem will be simplified.

But nothing to change the fundamental problem: When you are programming
you are not assembling pre-engineered components. You are developing
a new sequence of steps to perform a task. This is not and will never be
accurately quantifiable. (I'm just waiting for some manager to give me
his logic why this is wrong...and post it)

All OOP does is move the problem up to a higher level of abstraction.
Instead of having the problem with individual lines of C code, you now have
the problem with a sequence of "components."

A computer does things sequentially, that is that. At some point you have to get
down and dirty and specify the sequential steps. If you hide them inside
"objects" or "foobars" it does not matter, the sequential steps still have
to be there, and someone has to estimate them and program them.

I think focusing on making the workplace a happier place by looking at
psychological issues, etc. would prove more fruitful for productivity than
OOP ever will. But participation in the psychology USENET newsgroup is
at an all-time low.

>Think of what happens
>in electronic circuit design. The design engineer has databooks full
>of chips and other components that he can draw upon. At some point, it
>may be cost-effective to build an application-specific chip. But how
>does he do this? He doesn't have to design his chip at the transistor
>level, because he has a CAE system that allows him to assemble pre-designed
>logic cells, which perform higher-level functions and have specified
>interfaces.

But the scenario you describe has far more limited choices than
a software programmer has. You have an infinite number of different
functions and arguments you could devise to perform a task. You have to
do research to find the right ones.

With the electronic circuit, you have a finite number of components to
select from to do the job, because you can't "build your own" on the fly
like you can with software.

In addition, unless you have a microprocessor, you have no algorithms
running inside the chips.

With objects, you may have to tune that internal algorithm for a specific
task to save CPU time. Oh, but wait - you can't do this because this
would violate the fundamental principle of objects. Why should you not be
able to take advantage of a flexibility that software affords over hardware???


>But that does not
>mean that every time I go to write a program I will write every line of
>code from scratch. If there is a way that I can use well-made components,
>so much the better. Then I can spend my energy working on the real design
>issues of the project.

I agree, but my experience with the "real world" leads me to suggest
that this is impractical.

Cliff Heyer
===========================================================================

John R. Dudeck

unread,
Jun 24, 1990, 8:08:51 PM6/24/90
to

I have jumped into this discussion because I have just finished taking a
course in software engineering, and in the course we spent some time talking
about just these topics. I hope Cliff doesn't think I am flaming him :^)

cliff...@cup.portal.com (Cliff C Heyer) writes:

>>Here is where Object-oriented design comes in. Instead of a date validation
>>function, what you would have is an object called date, and one of the things
>>that it can do is maintain its validity. You don't have to have a date
>>validation function, because a date object can never be anything but a date.
>I know, but this does not solve the above problem. YOU STILL HAVE TO PUT
>DATA IN THE OBJECT! YOU *MUST* VALIDATE IT IF IT COMES FROM AN
>OUTSIDE SOURCE!

Ok, I admit that when you are interfacing to the outside world, you still
need to validate your data.

Part of the issue here is to what degree your outside world is outside the
domain of the classes that you have to work with. The goal is to have more
of the problem domain already available as components, so that you don't
have the situation where your data is coming from outside the system.
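
[A C sketch of the "date object" under discussion, with hypothetical names:
the representation is hidden, and validation lives in the one operation
that lets outside data in.]

    #include <stdio.h>

    typedef struct { long ymd; } Date;    /* private canonical form */

    /* The single entry point for outside data: validate on the way in. */
    int date_set(Date *dt, int y, int m, int d)
    {
        if (m < 1 || m > 12 || d < 1 || d > 31)
            return 0;                     /* reject bad outside data */
        dt->ymd = (long)y * 10000 + m * 100 + d;
        return 1;                         /* from here on, always a date */
    }

    /* "Ask it to display itself" - one place to change the boss's format. */
    void date_display(const Date *dt)
    {
        printf("%02ld/%02ld/%04ld\n",
               (dt->ymd / 100) % 100, dt->ymd % 100, dt->ymd / 10000);
    }

    int main(void)
    {
        Date dt;
        if (date_set(&dt, 1990, 10, 9))
            date_display(&dt);            /* prints 10/09/1990 */
        return 0;
    }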

>I submit OOP to have NO BENEFIT in terms of development time to the writing
>of a program to reduce the above to a common date.

Well, of course if you have to write this function, you have to. The
goal would be to not have to.

>The resulting
>code may be easier to maintain, and more reusable. But the real problem
>today is time estimation; that is what is bankrupting companies and
>departments.
>

>But nothing to change the fundamental problem: When you are programming
>you are not assembling pre-engineered components. You are developing
>an new sequence of steps to perform a task. This is not and will never be
>accurately quantifiable. (I'm just waiting for some manager to give me
>his logic why this is wrong...and post it)

I would like to see some managerial-level input on this topic, too!

>All OOP does is move the problem up to a higher level of abstraction.
>Instead of having the problem with individual lines of C code, you now have
>the problem with a sequence of "components."

Yes, it DOES move problems to a higher level of abstraction!!!! That is
exactly the point. Of course you can say that C is nothing more than a
higher level abstraction of assembly language. But in going to a higher
level, it also becomes more powerful.

>A computer does things sequentially, that is that. At some point you have to get
>down and dirty and specify the sequential steps. If you hide them inside
>"objects" or "foobars" it does not matter, the sequential steps still have
>to be there, and someone has to estimate them and program them.

A lot of what is gained in OOD is in the de-sequentializing of our problems.
Objects can be considered as independently executing processes. Because of
the extreme decoupling between objects, the problem is broken down into
more manageable pieces. It seems to me that this should make the problem
of time estimating easier. I'm not experienced in this yet; none of the
projects that I have worked on were object-oriented designs.


>
>I think focusing on making the workplace a happier place by looking at
>psychological issues, etc. would prove more fruitful for productivity than
>OOP ever will. But participation in the psychology USENET newsgroup is
>at an all-time low.

Well, to some extent at least, the shift from functional design to
object-oriented design is mainly a psychological one... for the
programmer, that is :^)

>With objects, you may have to tune that internal algorithm for a specific
>task to save CPU time. Oh, but wait - you can't do this because this
>would violate the fundamental principle of objects. Why should you not be
>able to take advantage of a flexibility that software affords over hardware???

The idea is that you shouldn't have to be concerned, unless you want to be,
about such aspects as algorithm fine-tuning. Certainly I wouldn't advocate
that algorithms shouldn't be fine-tuned to an application. I would say
that if you can show that an algorithm used in an object in your application
is the cause of performance that does not meet your requirements, then you
should find the source code of the object and fine-tune it, or otherwise
come to a solution.

>>But that does not
>>mean that every time I go to write a program I will write every line of
>>code from scratch. If there is a way that I can use well-made components,
>>so much the better. Then I can spend my energy working on the real design
>>issues of the project.
>I agree, but my experience with the "real world" makes me suggest this
>to be impractical.

You're probably right for now. Whether or not this continues to be
impractical in the future will depend on how well the technology
catches on.

Brad Cox

unread,
Jun 24, 1990, 5:55:49 PM6/24/90
to
In article <268462...@petunia.CalPoly.EDU> jdu...@polyslo.CalPoly.EDU (John R. Dudeck) writes:
>cliff...@cup.portal.com (Cliff C Heyer) writes:
>>Re: Ralph Johnson
>>>After "the industrial revolution" we will start reusing software
>>>instead of rebuilding it all the time.
>>
>>Hah. This will never happen, and I'll explain why. (explanation deleted).

I've been urgently searching for a historical source to back up my speculation
that the cottage-industry gunsmiths, upon hearing of this new-fangled Armory
Practice, would have sounded a lot like us. "Goodness gracious, think
of the trouble and expense it will take to make interchangeable parts. It
will never work. It will be too expensive".

The only historical datum I've found thus far to back this up is that
armory and congressional records showed that it really *was* far more
expensive to build guns the new way. But only the producers seemed
to care; the consumers didn't really care (much) because their priority
had become easier repairability.

Is anyone aware of data to support, or shoot down, this speculation? Data
for either guns *or* software would do nicely.

Carl Klapper

unread,
Jun 25, 1990, 2:34:06 PM6/25/90
to
In article <31...@cup.portal.com>, cliff...@cup.portal.com (Cliff C Heyer) writes:
> Programming will never be done on schedule, because you are not
> assembling components when you program. You are DEVELOPING new
> components and devising a way to assemble them. These components have
> no previous history to compare to for an accurate estimate.

It's worse. Programming frequently requires using unreliable and unverified
components which are replaced just as soon as you figure out fixes (if you
have the source) and workarounds. The new (outside) components are just as
unreliable and unverified, but require different workarounds and fixes,
as well as a different set of primitives. The original schedule was created
under the assumption of a solid foundation of hardware and subcomponents.
The reality is that one is building on shifting sands.

I highly doubt that any software can be built on schedule until both hardware
and software are verified to conform to specifications of appreciable vintage.
Unless, of course, the schedule allows for an indefinite number of periods
of indefinite length to be spent on porting.

+-----------------------------+--------------------------------------------+
| | Carl Klapper |
| | Odyssey Research Associates, Inc. |
| Verify, then trust. | 301A Harris B. Dates Drive |
| | Ithaca, NY 14850 |
| | (607) 277-2020 |
| | klapper%orava...@cu-arpa.cs.cornell.edu |
+-----------------------------+--------------------------------------------+

Scott Henninger

unread,
Jun 25, 1990, 3:22:34 PM6/25/90
to
I think this issue can be articulated in summary without taking up a few
hundred lines of text (sorry Cliff!). What's really being said here is
that the complexity of software engineering does not lie in programming
languages or in methodologies (there are problems, but they are not the
essential problems) - it lies in the *domains* we try to apply computers
to. And problems with standardization, components engineering, and
uncertain requirements occur because we apply software to many domains
that aren't well understood yet.

Notice that a subtle distinction occurs here. There is a difference
between how a field is traditionally practiced and how computers affect
this tradition. When I say that we apply software to ill-understood
domains, I am saying that computers change a field such that we don't
understand it as well anymore. For example, accounting practices have
been fairly well known, but applying computers to accounting poses many
new problems and opportunities. The spreadsheet has both alleviated
many problems and changed accounting practices. Of course, there are
also fields that were never conceived of before computers - like
programming (a case of the tail wagging the dog?).

What all of this means for this discussion is that we won't have a
software engineering methodology as scientific as bridge building until
we have the necessary scientific methods for each domain we wish to
apply computers to. And since the domains are ever evolving and
appearing, always pushing the limits of what we can do with the
computer, the production line vision of general computer applications is
nothing more than a dream.


-- Scott
sco...@boulder.colorado.edu

Kirk Kandt

unread,
Jun 25, 1990, 6:06:53 PM6/25/90
to
In article <15...@oravax.UUCP>, kla...@oravax.UUCP (Carl Klapper) writes:
|> In article <31...@cup.portal.com>, cliff...@cup.portal.com (Cliff
C Heyer) writes:
|> > Programming will never be done on schedule, because you are not
|> > assembling components when you program. You are DEVELOPING new
|> > components and devising a way to assemble them. These components have
|> > no previous history to compare to for an accurate estimate.
|>
|> It's worse. Programming frequently requires using unreliable and unverified
|> components which are replaced just as soon as you figure out fixes (if you
|> have the source) and workarounds. The new (outside) components are just as
|> unreliable and unverified, but require different workarounds and fixes,
|> as well as a different set of primitives. The original schedule was created
|> under the assumption of a solid foundation of hardware and subcomponents.
|> The reality is that one is building on shifting sands.
|>

There have been several studies that show that most tasks are highly
repetitive. Thus, most systems do not have to be newly developed; they
can be assembled from reuseable components. The issue comes down to
economics. What is the cost of cataloging (i.e., identifying the
important attributes) components to be reused, and later identifying
them for reuse with or without modification? Once this is known, then
the overall cost of reuse can be compared to the costs of new development.

Various mathematical software libraries are successful because
components are reliable and easily identifiable. This same success can
be had in other domains too! Someone just has to buy into the high
start-up costs in the belief that the long-term benefits are there.
Unfortunately, the procurement process of American society does not allow this.

Scott McGregor

unread,
Jun 25, 1990, 8:47:15 PM6/25/90
to
In article <30...@psueea.UUCP>, war...@eecs.cs.pdx.edu (Warren Harrison) writes:

> script writer is like a software designer/ producer is like a product
> manager/ director is like the project manager / actors are like programmers
> (I like that part :-) / continuity is like QA
>
> There is little maintenance with film, but sequels which borrow a great deal
> of the original script as well as some film cuts might offer similar
> challenges.
>
> Any comments on this analogy?

I also like the analogy to movie production and use it frequently myself.
However, I believe that we are doing software the way movies were done
before Griffith. I do not think that the director role is like the
project manager--I think that it is like the designer, particularly
the designer of the external interface. Programmers are not like actors,
Programmers are like camera men. In the days before Griffith it was
common that the director (i.e. the person with a vision about what the
audience would want to see) was the same person as the camera man. who
knew how to operate the camera and make the right lighting exposures. After
Griffith, these largely became separated and it became less necessary to
be a master technician of camera equipment in order to create a good movie.

In programming today, to a large degree we have not made this division
of labor between those people who want to be technically intimate with the
machines (i.e. the programmers) and those people who want to be intimately
knowledgeable about what will appeal to their audiences (the designers).
We typically have only product managers who give overall design specs
(e.g. "MS Word and WriteNow were box-office boffo! Let's do something
like that too!"). But detailed control of the user interface interactions
is typically still done by those who are trying to squeeze efficiency out
of the machine. I believe that the state of the software interfaces also
is comparable in quality (in terms of meeting what users wanted) to the
pre-Griffith style of movie making art.

Note that I don't mean by this that programming is not a difficult and
valuable enterprise. On the contrary. I think it is quite valuable,
too valuable in fact to be left to dilettante directors. I don't particularly
mean to celebrate directing and user interface design over and above
programming. I think both are important and that the best movies are
the result of good camera techniques, cinematography AND direction, just
as great software usually reflects both attention to the external
requirements and desires of the consumers AND to internal efficiency
concerns. Some people make better directors and some better camera
operators. I think similarly that some people make better designers
and some make better implementors. One test I sometimes use to
see where people fall on this spectrum is to ask where the most
enjoyment from programming comes from. Those who respond that they
like creating their own more logical world are often better in the
implementation arena. Those who say they are turned on by trying to
figure out what users really want when they don't explain themselves
eloquently are frequently more comfortable in design. At present,
in many jobs these aren't separated--for some people and products
I think we would see improvements if such a separation occurred.


Scott McGregor
mcgr...@atherton.com

Scott McGregor

unread,
Jun 25, 1990, 10:44:21 PM6/25/90
to
In article <31...@cup.portal.com>, cliff...@cup.portal.com (Cliff C
Heyer) writes:

> The whole problem is that SOFTWARE HOUSES REFUSE TO TREAT
> SOFTWARE LIKE ENGINEERING! They try to treat it like production
> and manufacturing, and then they cry when deadlines are missed.

While I agree that software houses refuse to treat software
like engineering, I do not believe that this is why they cry
when deadlines are missed. Even other engineering businesses have
deadlines and schedules. Accuracy of prediction varies across
disciplines and even companies. But people still have them.
Companies "cry" when people miss deadlines because the economic
consequences of missing them can be threatening to either the
individuals or to the organization as a whole. This is as true
in engineering as in manufacturing and is a psychological and economic
factor, not a scientific or engineering factor.

> The point is that "deadlines" are inappropriate in engineering, because
> the quality of the product suffers (defective cars, TVs, etc.) You don't run
> an engineering firm with that mentality. You don't run a software house
> with that mentality, but many do. An engineering firm with a profit only
> motive will go out of business after all the structures it designs collapse.
> Actually, this is what is happening with a lot of software houses
> these days.

Actually, since much of engineering IS about tradeoffs it is entirely
appropriate to consider deadlines in development. Building an elegant
product to 80% completion and then abandoning it due to lack of money,
or because another company has already nailed down the market is not
good just because the quality of the implementation that was completed
was high. Success comes only if you survive long enough to deliver
your products. Money, manpower and time not being unlimited, you have
to meet some constraints to ensure success. Within those constraints
it is entirely appropriate to concern oneself about product quality.
I also don't doubt that some people paint the constraints as being
tighter than they really are, thereby making inappropriate decisions
about product quality. But that doesn't mean that one should
abandon managing to constraints--it just means you should get more
realistic managers. I am loath to make any blanket claims about
any class, but in general, many engineering firms go out of business
not from too much attention to cash flow and profits, but from too little
attention to short term finances to ensure their day to day survival
long enough to reach the long term.

> I predict "software building blocks" to be a dismal failure, unless one is
> happy with resource hog programs. There is too much tuning that a craftsman
> can do to double and triple performance in certain situations. Adding
> "black boxes" to eliminate this process will guarantee mediocre programs.
> That is, until we have 10000 MIPS on every desktop, then maybe "how well"
> software works will no longer be an issue.

Note that this is already happening. Some machines are getting fast
enough that people are accepting resource hog programs that are easy
to build and maintain even though more efficient systems are possible.
The most obvious cases are 4GLs and spreadsheets that are coded by
non-programmers. Higher-efficiency programs (in terms of transactions
per second, etc.) are achievable by trained programmers in languages
like assembler and C. But these programs are already "fast enough" for
their users, and programmers are not consulted.

Additionally, more and more programs are used "off the shelf". Once
upon a time almost every company's payroll, Accounts Payable and
Accounts Receivables were done with specialized programs suited
especially for that company. While this is still true for many
large or older companies, most small or new companies now either
purchase canned software packages of this sort or subscribe to banking
services that use one set of these programs for all their clients.

> /.. line of code level... [same as] describing the depth of
> /every groove at each point (external interface) and the precise molecule
> /by molecule composition of the material.
> Every item is accounted for in the building estimate, while every item
> IS NOT accounted for in a software estimate. No wonder software
> estimates are wrong so much of the time. But then as Scott points out,
> you can't complete the whole program just to get the software
> estimate. The software business is a tough one to be in.

My point is that a succinct description is sufficient to describe
an item in a building estimate, but that without standardized components
this is not possible in software. If the precise threading depths and
widths and nut and bolt lengths were not standardized, you would have
to write a paragraph or two about each one used in the building
describing not only these depths and widths and lengths but the
tensile strengths of the materials, their thermal properties, etc.
If this was done for building, instead of relying on standardized components
for which these paragraphs are already written and well-known, then
detailed design costs would dominate construction costs too, and
probably fewer buildings would be done with so many plans. (In fact,
back when timbers were custom cut and nails were hand made, fewer
design plans were done. Even huge cathedrals were often designed "on
the fly" over years and years, with different parts of the buildings
done differently by different craftsmen.)

Now some building projects have really immense planning phases that
dwarf the complexity of some software projects. However, for those
complex building projects we find that the rest of the costs of the
project are giant too. I'll bet that if you compare like-size building
projects to like-size software projects (alike, say, in total man-hours
to completion of the entire project) you'll find that the software
design phase consumes a larger share of the relative budget than
the building project's does. Increasing this to an even larger amount
spent on the project to ensure even more precision is often uneconomic
for software.

> /Moreover, because the engineering costs are small
> /compared to the overall construction costs, people are willing to pay for
> /a "paper study"
> This is true. In a big project, the engineering cost is small compared to
> MATERIALS. But the accuracy is good because all the components
> are known quantities to estimate.

Yes. Now if you are a little less specific, you'll save more on the
up front engineering cost, but at increased variability of the production
phase. At some point you reach an individual's or organization's balance
for predictability vs. cost (the flip side of risk vs. reward). That's
what determines what the actual tradeoff is. Unfortunately, largely
because of the lack of software components, the specification process
is very expensive. So many people trade off considerable design
specificity to achieve lower costs. Granted, that comes at some cost
in predictability as well. People would like both increased predictability
and lower cost. But sometimes the cost factor is the gating factor
since if it costs more you might go out of business or at least
cancel that investment. In general in the software business today
these tradeoffs tend to be toward lower up front costs at the cost
of poor predictability. Apparently this common tradeoff is a reasonable
one, since companies that do more complete specs are not by and large
dominating the industry's profits yet.

> /However, in Cliff's suggested version.....management has to totally
> /commit to a project even before they know what it will cost.
> /...[this] shuffles the cost prediction problem to someone else, it
> /doesn't solve it.
> My point is that software is not the business to be in if you want the
> accurate cost prediction that occurs in the construction industry.

I agree. And I don't think that people are in it for that reason. I
think that they are in it because there is some money to be made there
and because they think they have the special skills to be successful
there. I believe this is true at both the macro and micro levels.
Even so, I know that people have different tolerance levels for accuracy
and predictability. So within any given organization there is some
tension about the precise level of tradeoffs. It is by no means always
this way, but I have frequently found that many of the programmers have
a lower tolerance for ambiguity than their managers. So while a manager
WANTS the same low level of unpredictability as their engineers, and they
may REWARD the more predictable or favorable outcomes, they will be less
tense personally about possible variance in the predictions than the
engineers. Now this reward structure may tend to aggravate the self-induced
tension that engineers have about lack of precision in their estimates, and
I suspect that this psychological fact may be the real root of this discussion.

> I would rather see software classified as engineering, with no construction
> component, and have all software ventures classified as "engineering
> studies". Such studies are not fixed-priced, and the accuracy is equal
> to the amount of cash you pump in. If you want an accurate estimate, you
> PAY FOR IT. Don't do it breaking the backs of programmers who make the
> educated guesses, which is all that can be made unless to program is
> coded and finished. Programmers are really getting a raw deal in the
> current scenario. That's why so many of them change jobs and are
> generally disillusioned with their career.

I truly believe that no semantic change in whether we classify the work
as engineering or construction will make a difference in this matter.
As I noted above. The desire for price control and predictability is
a basic economic need for survival of the individuals (managers) and
organizations in charge. It is also a basic psychological need of
the individuals for some sort of stability, purpose, and feeling of
control over their future. I do not believe that you can make these
psychological needs go away by a change of classification or terminology.

I do believe that Cliff is correct that if you want an accurate estimate you
have to pay for it. I believe that any competent manager who has done this
for a while will recognize that this is necessarily true. But some (and
I include myself) will tolerate some amount of unpredictability ("I just
want a 'ballpark' figure") in order to save some cost. We don't require
absolute accuracy--close is fine. Closer is better, as long as it doesn't
cost much. If it costs too much I settle for less accuracy. I try to
be humane about this, and let my people know that I understand the level
of variance implied (and I take care to consider that when I have to
make commitments myself). However, I have frequently found that
my engineers are personally less willing to accept the same level of
ambiguity as I am. As I say, I try to be humane about this, but
sometimes things do come down to personal differences. I hope that
my engineers in the past haven't felt ill-served by this--perhaps they
will reply and tell me differently. I do think that there is one other
exacerbating problem, and that is that the ability to predict seems
to be in part determined by years of experience. Some people have
had many years but don't seem to have been able to convert any of
that experience to good use, but there are many experienced people
who give MUCH BETTER estimates and predictions than others. I rarely
find inexperienced people who seem to have a talent for good estimates
or predictions. This is unfortunate for our profession at present,
because so many of the line manager slots are filled with people with
only a few years of management experience and often less than ten
years programming experience. And so lots of engineers pay for this
poor experience of their management, leading to disillusionment, etc.

> /The truth of the matter as to how building contractor's can predict
> /schedules better than software engineers is that they have a book
> /of expected times to do standard tasks. The tasks and materials
> /are pretty standardized and have been done by large numbers of
> /people so that reasonable maximum times have been observed.
> THIS IS THE CORE OF THE ISSUE!
> I submit that this will NEVER occur in software because the components
> are necessarily different for each program's unique needs. All
> buildings are made of the same pieces. Why should we think that
> programs in THOUSANDS of industries will ever share many of
> the same components? I think the idea is poppycock.

Personally, I think that the jury is out on this one. There are still
some "custom craft" tasks that don't show up in contractor's books
and which are subject to more variation and risk. I think we will
always have some custom craft work as Cliff points out. But it is not
clear to me that we won't have a growth in components. In some ways
I think that we already have, but that you see the benefits most
strongly in the end-user systems and applications program areas.
I've already mentioned the re-use of accounting software, of spreadsheet
macros, of 4GL libraries by end users. We are also seeing standard
libraries like Xtk, Motif, even the unix programming libraries that
people are adopting and reusing where before they would have written
their own window systems, etc. Now I (re)used the Xtk widgets
a lot, and to the extent I did, I didn't have to worry about variance
in estimating how long those functions would take to build. Now, there
was a case where I needed a widget that wasn't in Xtk, so I had to
build it from scratch. It was harder to estimate. That's life.

>
> /It is different in the software world. ..people ... make
> /an estimate without any "book" [to] consult.
> / ... tasks...can be more variable.
> Exactly what I've said.
>
> /[distinguish] a schedule (i.e. a contract) and an estimate.
> [discussion of inclusion of variances in estimates, scheduling, etc.]
> Omission of these items surely contributes to erroneous estimates.
>
> /For many software projects time to market is
> /extremely important. Early entrants often make significantly more
> /money than late entrants.
> Try a small core of programmers who own enough stock in the company
> that they will make a nice nest egg if the product makes it big. Treat
> them as an R&D group rather than a production group with
> conventional management. I think this will produce more software
> of better quality in less time than conventional approaches. Financial
> backers of the company won't like this approach though because
> if one person leaves, the value of their investment will be diluted.

I think that in many cases if you ran this experiment you would see
the benefits you claim. But I would attribute it to the fact that you
had selected a SMALL group and made the individuals more directly responsible.
Because of their economic incentive I would guess that some would take
a more personal interest in what the customer wanted too. And those things
can lead to more successful products. But if no one paid attention to
the costs I would not be surprised if several of these groups ran out
of money before completing their projects. And if the host organizations
ongoing success was dependent on these projects I would expect some of
the host organizations to collapse too.

> / So there is the natural pressure (mentioned
> /above) to look for optimistic estimates.
> YES! I find I always have to match the estimate my manager
> suggests, because if I don't I'm in trouble. Actually this
> makes estimating easy, except that I am still responsible
> when the estimate is wrong.

Well, I can understand that. But if you also say "no guts, no glory",
what does this imply about the need for "backbone" in setting estimates
and schedules that are reasonable? Doesn't "guts" mean taking an
unpopular stand sometimes in the hopes that long term rewards will
pay off?

> Re: Discussion on OOP:
> I have yet to see how OOP solves any of the problems regarding
> estimating time for writing a program. Someone at some point has
> to develop a sequence of steps that make each "object" work. The
> subjects I'm discussing aren't even relevant to CAD because using
> CAD isn't writing a program - CAD is a higher-level abstraction using
> an existing set of modules already written.
>

The way OOP helps is that it makes the components more standardized
and higher level. If you can live with the performance and flexibility
trade-offs, OOP means that you will have fewer primitives that you
will have to specify. Fewer, more standardized primitives means less
detailed and less costly designs relative to the final products. It
changes the cost of the paper study relative to the variability in
the final product, by reducing the cost of the paper study on a
per unit basis. This doesn't mean that specification isn't done;
it means that instead of having to specify a page of code you say
"displays file name in an XwstaticWidgetClass widget" and you are
just as unambiguous. That's less to write, less to read, and fewer
possibilities for error in specification, which add up to a less
costly specification.
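
[For concreteness, a sketch of roughly what that one-line specification
expands to, written against the X Toolkit with the Athena label widget
standing in for the Xw class named above; the widget name and file name
are invented for the example.]

    #include <X11/Intrinsic.h>
    #include <X11/StringDefs.h>
    #include <X11/Xaw/Label.h>

    int main(int argc, char **argv)
    {
        Arg args[1];
        Widget top = XtInitialize(argv[0], "Demo", NULL, 0, &argc, argv);

        /* The whole "displays file name in a widget" spec is one call. */
        XtSetArg(args[0], XtNlabel, "report.txt");
        XtCreateManagedWidget("filename", labelWidgetClass, top, args, 1);

        XtRealizeWidget(top);
        XtMainLoop();
        return 0;
    }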

Scott McGregor
mcgr...@atherton.com

Scott McGregor

unread,
Jun 25, 1990, 10:53:19 PM6/25/90
to
In article <31...@cup.portal.com>, cliff...@cup.portal.com (Cliff C
Heyer) writes:

> But the scenario you describe has far more limited choices than
> does a software programmer. You have an infinite number of different
> functions and arguments you could devise to perform a task. You have to
> do research to find the right ones.
>
> With the electronic circuit, you have a finite number of components to
> select from to do the job, because you can't "build your own" on the fly
> like you can with software.

Ah, but you can: you can pay an ASIC shop to do this for you. The difference
is that the perceived cost in starting down this path to nonstandard
solutions is high. People usually do it intentionally for "proprietary
advantage" (my system is faster, more robust, more reliable....)
But in the eyes of many programmers the cost of starting down this
path is low, so they don't see the benefit. It may be that the
benefit is recognized at a different level, namely at the level
of the end user. To the end user, the ability to use an off-the-shelf
component instead of having to call a programmer and get
into the MIS backlog queue may be just the same level of cost
that people perceive in going to an ASIC. If true this has
considerable impact on the software industry at large, though
programmers may not feel affected for quite a while.

Scott McGregor

Eddy Sumardy

unread,
Jun 26, 1990, 10:04:38 AM6/26/90
to
It depends on which department you are referring to. In an R&D department,
programming is more research oriented; some of the people there do a lot
of (prototype) programming and a few do much less - they are usually
"the thinkers".
In other departments, e.g. product development, most do 100% programming.

-ed
P.S. The usual disclaimer applies here.

Trip Martin

unread,
Jun 26, 1990, 11:57:22 AM6/26/90
to
c...@stpstn.UUCP (Brad Cox) writes:

In a course I took on history of american technology, we covered the
fledgling gun industry. When the idea of interchangeable parts was
first pursued, it was much more expensive to build guns that way than
the old way, because the precision machining required had to be done by
hand. It took the industry about 30 years (a remarkably short period
of time for the era) to develop the technology necessary to make
interchangeable parts on a large scale.

Actually, it was the consumer who demanded interchangeable parts for
guns. Eli Whitney convinced the bigwigs in Washington that it was
a worthwhile goal. He argued that being able to construct working
weapons out of a bunch of damaged guns was a valuable asset on a
battlefield. The gov't agreed and poured lots of money into achieving
that goal. The producers didn't really have a choice in the matter.
If they wanted the contracts, they had to join in.

One interesting note is the environment in which the technology
developed. At the time, the gov't required that contractors could
not hold back new technology from them. As a result, the national
armories were kept up to date on the latest techniques, and became
a focal point for development. Any gun company that wanted to find
out the latest breakthroughs only had to send a representative to
one of the national armories.

If people want references, I can dig them up.


--

Trip Martin
ni...@pawl.rpi.edu

William F Ogden

unread,
Jun 26, 1990, 5:49:42 PM6/26/90
to
In article <84...@jpl-devvax.JPL.NASA.GOV> ka...@AI-Cyclops.JPL.NASA.GOV writes:

...


>Various mathematical software libraries are successful because
>components are reliable and easily identifiable. This same success can
>be had in other domains too! Someone just has to buy into the high
>start-up costs in the belief that the long-term benefits are there.
>Unfortunately, the procurement process of American society does not allow this

It should be noted that mathematicians spent several hundred wall clock years
and countless man-years exploring the domains in which these libraries work.
This produced a relatively small yet powerful conceptual basis for these
domains together with an elegant notation for the concepts. The notation
was validated in innumerable applications, and the theory was elaborated
in great detail. Most importantly perhaps, the notation (and the most useful
part of the theory) was taught to most engineers and scientists. When
creating these libraries, programmers didn't have to discover the utility
of log, arctan, eigenvector, etc. Moreover, prospective users already
know the terminology used to identify and describe components in statistical
or linear algebra packages, for example.
The lack of a well developed and widely known formal basis for many other
computing domains may well prove to be a serious impediment to the development
of reusable software.
/Bill

Scott McGregor

unread,
Jun 26, 1990, 1:48:27 PM6/26/90
to
> There have been several studies that show that most tasks are highly
> repetitive. Thus, most systems do not have to be newly developed; they
> can be assembled from reusable components. The issue comes down to
> economics. What is the cost of cataloging (i.e., identifying the
> important attributes) components to be reused, and later identifying
> them for reuse with or without modification? Once this is known, then
> the overall cost of reuse can be compared to the costs of new development.

For people who have done software development on one operating system,
and one language, at one company for a reasonable length of time, I think
it is obvious that people *DO* REUSE software. The software that they
reuse is the software that they have already written in the past. Sometimes
they can reuse whole subroutines unchanged, but sometimes they grab blocks
of code and modify them a little. The problem is that people don't reuse
other people's code as easily. I do not believe that this is totally
due to economic reasons (though those clearly play a part). Rather I
believe that the most important reason that reuse of other people's
code fails is that it goes against some of the psychological rewards
that programmers want. I actually did see a major reuse situation,
over ten years ago with Fortran libraries. The situation was totally
accidental in getting started and was extremely dependent on the
personality of the software librarian (who was a reference librarian
by training and interest, not a programmer), and on some of the physical
features of the layout of the building where the programmers worked
and where the librarian worked. Later, this job was overhauled and
programmers were given direct access
to the library and library tools and the level of reuse quickly fell back
to its original level. I've discussed this situation here before, so
I won't go into more depth unless there are requests, but the point is
not to overlook the power of personal motivations in situations concerning
reuse.

> Various mathematical software libraries are successful because
> components are reliable and easily identifiable. This same success can
> be had in other domains too! Someone just has to buy into the high
> start-up costs in the belief that the long-term benefits are there.
> Unfortunately, the procurement process of American society does not
> allow this.

Also, many of these libraries cover functions that are simple, easily
categorized, and well-known. A few sentences (often just the name)
are sufficient to tell what the function does. This means that the
cost of learning these libraries in terms of programmer's time and
interest is low compared to the cost of using less regularized
libraries with more special purpose routines.

The importance of human psychology on this problem is little appreciated,
but is greater than we may think. I believe that we have a lot to learn
from the people who are trying to understand the general principles behind
acceptance of groupware systems (of which shared libraries of reusable
components could be an example).

Scott McGregor
mcgr...@atherton.com

Lawrence Detweiler

unread,
Jun 27, 1990, 12:37:44 AM6/27/90
to
-----

>Actually I've always thought the best analogy for software development
>is movie production. It shares many similarities.

One of which is that some is B-grade and some is art. Unfortunately,
what determines the success of a program or a movie is not based solely
on its internal (the code) and external (the interface) aesthetic appeal.
They are vulnerable to the degrading effects of preoccupation with the
bottom line.

If software development is like movie making, I have had the thought
that programs are exactly opposite to Hollywood sets. A set has a
flashy, majestic appearance from the front, but from behind there is
nothing substantial. In a program, few have any idea of the subtle
and magnificent webs of intricate interactions that lie between the
choreography of twirling electrons in its wires to the parade of
photons meeting our eyes. What complexities lie in a program that
is mistaken by the user to be lying dormant! A program is an
inside-out set. The only similarity between the two is that the
viewer is blissfully unaware of some astonishing aspect.

Of course, the ideal programs of the future _will_ be more like movie
sets, so that my standard of Excellence in Software can be more
readily achieved:

When the users say "there's more?!" and the programmers say "that's it?!"


ld23...@longs.LANCE.ColoState.EDU


Ed Robertson

unread,
Jun 27, 1990, 10:54:58 AM6/27/90
to
+-Concerning Is Programming R&D or Production?, Lawrence Detweiler said:
|
| >Actually I've always thought the best analogy for software development
| >is movie production. It shares many similarities.
|
| If software development is like movie making, I have had the thought
| that programs are exactly opposite to Hollywood sets. A set has a
| flashy, majestic appearance from the front, but from behind there is
| nothing substantial. In a program, few have any idea of the subtle
| and magnificent webs of intricate interactions that lie between the
| choreography of twirling electrons in its wires and the parade of
| photons meeting our eyes. ... A program is an inside-out set.

Unfortunately, I've seen many "programs" (perhaps they could be called
"systems", but to use either word is to defame those professionals who
are true programmers and system-builders) which remind me exactly of
Hollywood sets. These things are "database applications" written,
without proper design, for PCs.

What has happened is that a variety of tools, such as dBASE and Paradox,
have nice "interface builders" which, in the wrong hands, become mere
"facade builders." People see menus, multi-colored displays, and
whiz-bang functions keys and perceive an effective system, even though
there may be "nothing substantial" behind.

It used to be the case that printing on 11 by 14 edge-punched paper
had authority because it was computer output. Now that same mystique
has moved to the input side too.
--
Edward Robertson robe...@cs.indiana.edu
Computer Science Dept
Indiana University 812-855-4954
Bloomington, IN 47405-4101

Tom Thomson

unread,
Jun 27, 1990, 2:22:03 PM6/27/90
to
In article <268556...@petunia.CalPoly.EDU> jdu...@polyslo.CalPoly.EDU (John R. Dudeck) writes:
>
>A lot of what is gained in OOD is in the de-sequentializing of our problems.
>Objects can be considered as independently executing processes. Because of
>the extreme decoupling between objects, the problem is broken down into
>more manageable pieces. This seems to me that it should make the problem
Every OO language I have seen makes this claim. However, every OO language
that I've seen (with one experimental exception, a laboratory toy language)
insists that the messages are SYNCHRONOUS, that the channels between the
processes are UNBUFFERED (and blocking), that every message has a REPLY
whose sending is implicit (in result delivery). In other words, the
languages require sequential execution; there is no de-sequentialising at
all, and programs written in OO languages would cease to work if the compiler
and/or the execution system stopped enforcing sequential semantics.
Claiming OOD provides a lot of gain in de-sequentialising our problems is
pure bunkum.
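
[The point can be seen in miniature in C, where a "message send" of this
synchronous kind is just a blocking call through a dispatch slot; the names
here are invented.]

    #include <stdio.h>

    typedef struct Counter {
        int value;
        int (*increment)(struct Counter *self);   /* "method" slot */
    } Counter;

    static int counter_increment(Counter *self)
    {
        return ++self->value;    /* the REPLY, delivered as a return value */
    }

    int main(void)
    {
        Counter c = { 0, counter_increment };
        /* A synchronous send: the sender blocks until the reply arrives. */
        int reply = c.increment(&c);
        printf("%d\n", reply);
        return 0;
    }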

The things that OOD really has got going for it are
Abstraction with strong encapsulation
Inheritance mechanisms to facilitate re-use
A reasonably expressive type system (sometimes)
all of which are principles that were well understood and practised by
competent software engineers (and language designers) long before the
wonderful phrase "OO" appeared and became the flavour of the month.
The only class of languages which generally provide an escape from
sequentiality are the functional languages (logic languages like prolog
have sequential semantics; languages like parlog, with moded arguments,
may avoid sequentiality so there's a subclass of logic languages that
provide the escape too). It's a pity that they don't provide decent
abstraction and inheritance mechanisms too - maybe a functional OO
language (a contradiction in terms?) would be the answer to our dreams?

Tom Thomson [t...@nw.stl.stc.co.uk]

Will Raymond

unread,
Jun 27, 1990, 9:54:04 PM6/27/90
to

In all the conversations so far, I've yet to see anyone mention learning
the task you are trying to program. I am in the end days of a fairly
complex project, writing a "user friendly (!?)" front-end to a DMS10
telephone switch.

My shop has a good repertoire of C routines, and we generally use them.
In some respects, we assembled the software from off the shelf components.

Half the time I've spent on this project (1 of 5 programmers spending
28 weeks) has been learning the switch. The switch is essentially a
single purpose computer with 1.8 Megs. of executable written over a
15 year period. Our Technology group produced dozens of conflicting input
and output formats for switch commands, the documentation of which
encompasses 5 binders and 4,500 pages.

15 years of incremental programming produced many inconsistencies, both in
software and documentation.

The upshot: adapting our product to the switch took a long time not
because we couldn't assemble code from reliable building blocks, but
because we had to learn and relearn the task.


Will

"Because I care...." Dr. Moreau

Rob Kling

unread,
Jun 28, 1990, 12:16:38 PM6/28/90
to

Hi ....


I saw your note about PC database systems badly "designed" .......

I've been teaching an IS course which includes a segment on
database design, using Paradox. (I also use Paradox for a variety
of small systems on a research project I direct.) Paradox has a
neat interface, but is awful in providing support for program
development & documentation (e.g., it lacks a good interactive editor
to facilitate documentation, and tools to locate x-refed variables,
etc.). I've talked with designers of minicomputer relational
products like Ingres, and they claim that the lousy environments
behind the PC products are also typical of relational database managers for
minis.

Products like Paradox, Dbase, etc. INVITE bad designs ...
even though I prefer them to C for writing database systems. (grin)


There are a number of books about Paradox & 3 million similar
books about Dbase, etc. These books teach the mechanics of the
systems, much in the way that pascal & C books teach about language
features rather than software design in language X.

On this campus, the administration is developing a number of
information systems in Revelation ... using student programmers who
are not trained in IS design .... you can imagine the quality of
the resulting products ....

I have yet to see a good book on the design of database systems
that is aimed at highly interactive products
(rather than at transaction-oriented
mainframe systems).

Have you seen anything of use for professionals or students
of this kind?

Rob Kling
UC-Irvine

Kirk Kandt

unread,
Jun 28, 1990, 2:26:21 PM6/28/90
to
In article <81...@tut.cis.ohio-state.edu>, og...@seal.cis.ohio-state.edu
(William F Ogden) writes:
|> In article <84...@jpl-devvax.JPL.NASA.GOV>
|> ka...@AI-Cyclops.JPL.NASA.GOV writes:
|>
|> ...
|> >Various mathematical software libraries are successful because
|> >components are reliable and easily identifiable. This same success can
|> >be had in other domains too! Someone just has to buy into the high
|> >start-up costs in the belief that the long-term benefits are there.
|> >Unfortunately, the procurement process of American society does not
|> >allow this
|>
|> It should be noted that mathematicians spent several hundred wall clock
|> years and countless man-years exploring the domains in which these
|> libraries work.
|> This produced a relatively small yet powerful conceptual basis for these
|> domains together with an elegant notation for the concepts. The notation
|> was validated in innumerable applications, and the theory was elaborated
|> in great detail. Most importantly perhaps, the notation (and the most useful
|> part of the theory) was taught to most engineers and scientists. When
|> creating these libraries, programmers didn't have to discover the utility
|> of log, arctan, eigenvector, etc. Moreover, prospective users already
|> know the terminology used to identify and describe components in
|> statistical or linear algebra packages, for example.

I claim that anything that we build -- a command and control system, a
flight reservation system, an operating system, and so on -- has a
conceptual basis; if it didn't, we couldn't build it. The notation
for each of these domains can be described by a language which would
provide the syntax, semantics, and even pragmatics for the modeled
domain. Individuals can be easily taught this new "notation"; it would
not be any more difficult to learn than another programming language or 4GL.

|> The lack of a well developed and widely known formal basis for many other
|> computing domains may well prove to be a serious impediment to the
|> development of reusable software.

True, but if a domain is that misunderstood then we are not building
implementations of it anyway.

I reassert that any artifact that we build can provide components for
later reuse. I also assert that if we have built a component we can
adequately describe it in a formal or informal notation so that its
complete behavior is understandable by a human. I also know that
providing such information for later reuse requires much labor at great
cost. This is the impediment to software reuse. The issue is "given
that you can describe a domain, how do you build an information repository in
a cost-effective manner."

As an example, take the domain of data structures. It is a relatively
simple domain. It is well understood. There are both formal and
informal methods for describing modeling representations and storage
structures. We understand the complexity of them all, including special
cases and programming tricks. Yet to date, there is not one reusable
data structure depository. Why is that? Because the costs of putting
all the knowledge contained in existing data structure books and
articles into such a depository would be incredible.

My thesis work was on managing design information for later reuse. I
took a couple of toy problems -- the Dutch National Flag algorithm and
quicksort -- and described them so that they could be later reused. The
amount of information obviously was subjective. But in each case it
took approximately 5-10 pages to describe them so that I felt someone
else could understand and reliably reuse them. After working on these
simple problems I came to the conclusion that the amount of effort
required to document complex components would be much greater than the
actual construction of said components.

As a result, I concluded that software reuse should only be attempted if
you are producing the same type of artifact over and over again. For
example, if you write 25 Ada compilers for different machines then
software reuse is feasible, but if you only write 3 of them then it
probably isn't. I also concluded that software reuse as many people
envision it is impossible unless some manner of automating the
acquisition of information is achieved. If someone is looking for a PhD
thesis, here it is.

Kirk

William F Ogden

unread,
Jun 28, 1990, 10:15:33 PM6/28/90
to
In article <85...@jpl-devvax.JPL.NASA.GOV> ka...@AI-Cyclops.JPL.NASA.GOV writes:

...


>I claim that anything that we build -- a command and control system, a
>flight reservation system, an operating system, and so on -- has a
>conceptual basis; if they didn't we couldn't build them.

One must be careful to distinguish between mathematical domain languages
and programming languages here. Generally, mathematical domain languages
being declarative and nonconstructive are more powerful than programming
languages. That makes them useful for concisely expressing precise
specifications for software components -- provided you can find a
suitable mathematical domain within which to represent a particular
software component domain. For example, it might be the case that the
reusable components for flight control systems could appropriately be
represented in the mathematical theory of partial differential equations,
or that those for airline reservation systems could best be described using
graph theory. If furthermore, there turned out to be a Find_Flights
operation among the airline components which `finds all flights from
airport A to airport B of duration less than T leaving on date D', this
operation would have a precise graph theoretical specification which
would answer any questions about what it would do. Note here that
mathematical domains and programming domains differ in that Find_Flights
belongs to the latter, but not the former.

If by a `conceptual basis' you mean a mathematical domain in which
components from a programming domain can be specified exactly, I expect
your claim, taken literally, is correct. The real problem, of course, is
to find an APPROPRIATE conceptual basis [mathematical domain] for describing
components. For example, choosing number theory to describe airline
reservation components (using say Godel numbers), while technically
quite feasible, would produce totally useless specifications.

I presume accordingly that your claim should be interpreted as postulating
the existence of appropriate mathematical domains for describing any
reusable components that will be created. That seems far from obvious.

> The notation
>for each of these domains can be described by a language which would
>provide the syntax, semantics, and even pragmatics for the modeled
>domain.

If one did not know how difficult it is to work out the concepts and
theory for a new mathematical domain, this could almost be read as
suggesting the creation of a theory for each programming domain.

> Individuals can easily be taught this new "notation"; it would
>be no more difficult to learn than another programming language or a 4GL.

Most of us find it fairly challenging to learn a new mathematical domain.
It's not, however, the 'notation' of differential equations or category
theory that gets to you; it's the insights about the concepts and the
theory that take the time and effort to learn.

Unfortunately, a superficial knowledge of a mathematical theory won't
suffice for programming. A new age programmer who learned his number
theory from a pocket calculator might not know that a program that
sums up an array of integers from bottom to top gets the same result
as one that sums from top to bottom, even though he understands addition
perfectly well at a notational level.

/Bill

Cliff C Heyer

unread,
Jun 28, 1990, 10:06:28 PM6/28/90
to
[The following by Cliff Heyer who started this collection. I say this
just to help keep this collection "on thread"]
Brad Cox writes...

>The only historical datum I've found thus far to back this up is that
>armory and congressional records showed that it really *was* far more
>expensive to build guns the new way. But only the producers seemed
>to care; the consumers didn't really care (much) because their priority
>had become easier repairability.
I have to clarify my opinion here. I FULLY AGREE that OOP has
advantages, and will someday yield benefits beyond what can
be had today.

It's just that with my work I came to the conclusion that
OOP is not of immediate benefit.

> I feel it is possible to
>separate the design phase from the construction phase in most instances.

NO NO NO!

My point is that the "construction phase" is NOT really a construction phase,
because you still have to DEVELOP. This is the whole problem! The term
"construction" implies a production schedule, which is NOT the case. With
software you are NOT assembling components, you are DEVELOPING new
components and a control structure to surround them. The terms

ARCHITECTURE and ENGINEERING

should be substituted for "design" and "construction" respectively.

> Most feel
>that a program of that size [30,000 lines] doesn't justify going through
>the preliminaries due to the overhead.
I agree. By the time the formalities are done, you could have half the
program finished.

>Are you kidding? Every engineering firm I ever have been associated with runs
>on deadlines! ...If you don't think that car manufacturers and TV makers work
>under deadlines, guess again.
My point is that these deadlines are by default flexible. You can't market a TV
that does not work, right? Therefore the deadline gets moved whether anyone
likes it or not. And I see evidence that those engineers are not pressured to the
same degree as programmers. Management knows that pressure = errors =
delayed completion = increased costs. (But then you have to weed out loafers
who take advantage of a non-production environment to not produce.)

>I don't think users of software components expect to just 'plug in' a bit
>of code for instant functionality (at least, I hope not!).
I'm getting sick of manager types who have never programmed (MBAs)
talking about OOP this way, as if it were the "final solution" to the
software problem.

> I could have
>done a paper design of the new phone line, but the time required compared to
>the size of the project didn't justify it. You can do the same with software,
>but you don't have to. It's not the only way.
A great observation/explanation. You have made me aware of
something I have wanted to explain to people but didn't know how. When
you have a 3 man-month project, it's better just to start it than to
spend 1 man-month spec-ing out every detail that a good programmer
will do instantaneously "on the fly".

>>Programmers are really getting a raw deal in the
>>current scenario. That's why so many of them change jobs and are
>>generally disillusioned with their career.

> There seems to be a belief that if
>Joe Programmer says he can do it in 5 weeks, he can probably do it in 4 and
>with fewer resources than he claims. ...and that they never learn from the past
>so keep repeating the same mistakes over and over. They get pissed because
>software groups can never come up with an accurate estimate of time and are
>always over the time limits. Why don't they learn?
I think managers - especially MBA types - are (expectedly) insecure about the
"black magic" of programming. The only way for them to get security is
to make themselves feel that they "got it done as fast as possible", and
what better way than to allocate LESS time than the programmer says
he needs.

Thus, by default they disrespect the programmer and treat him as if he is not
telling the truth. This is the core of how programmers are getting a raw deal.
They are not respected for the work they do - they are treated as if they are
dishonest. (I suspect many WERE dishonest, perhaps making a manager treat
all programmers that way.) They are disrespected for making estimates on
DEVELOPMENT which can't be accurately quantified (you are not assembling
pre-made components).
>This
>leads to nothing but heartburn for the programmers and engineers who are
>saddled with the project.
>Unfortunately, most upper management I have been exposed to couldn't care less
>about the state of employees. They aren't the ones that have to do the hiring.
>It's the line managers that are sucking down Rolaids and trying to figure out
>how to jump through the next hoop.
And they wonder about job burnout...

John Dudeck writes....

>A lot of what is gained in OOD is in the de-sequentializing of our problems.

Ahhh-I got you again!
How does de-sequentializing happen?? Not because of the "magic" of OOP.

Old command line interfaces had HUGE amounts of code to tokenize strings and
then conditionals to figure out the command before executing the appropriate
subroutines.

All OOP has done is remove this layer, by having the user do the "IF" and
"TOKENIZE" by pointing to what he wants and clicking the mouse. In other
words, instead of using the computer to figure out the command we let
the user tell the computer the command by pointing to it and clicking.

No magic here, just the expected result of a change in hardware/
software architecture.
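For the record, here is the layer being described, reduced to a sketch in
Python (the command names and handlers are invented):

    def cmd_open(args):
        print("opening", *args)

    def cmd_list(args):
        print("listing", *args)

    COMMANDS = {"open": cmd_open, "list": cmd_list}

    def interpret(line):
        tokens = line.split()              # the TOKENIZE step
        if not tokens:
            return
        handler = COMMANDS.get(tokens[0])  # the big "IF" collapsed to a lookup
        if handler is None:
            print("unknown command:", tokens[0])
        else:
            handler(tokens[1:])

    interpret("open report.txt")

A menu-and-mouse interface deletes interpret() entirely: the user's click
selects the handler directly.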

So I agree the de-sequentializing occurs, and this is one of the exciting
things to me about OOP. But I can't give OOP the credit for it. It's
a result of the MOUSE.

>Objects can be considered as independently executing processes. Because of
>the extreme decoupling between objects, the problem is broken down into
>more manageable pieces.

This is also just good programming practice. You don't need to have OOP
to do it, although OOP provides better facilities to implement it.

>This seems to me that it should make the problem
>of time estimating easier.
As long as you still have to devise steps of code to solve
a problem, the estimating problem is still there.

------
sco...@boulder.colorado.edu writes....

>I think this issue can be articulated in summary without taking up a few
>hundred lines of text (sorry Cliff!). What's really being said here is
>that the complexity of software engineering does not lie in programming
>languages or in methodologies ... it lies in the *domains* we try to apply
>computers to ... we apply software to many domains that aren't well
>understood yet ...continues... the production line vision of general
>computer applications is nothing more than a dream.

You don't sound like you've been outside the academic community. Outside
you have to explain things in nuts and bolts to get your point across. This
means you must explain what is being done wrong (classify programming
as production) by comparison (assembly vs. development), and then
explain what should be done (architecture/engineering). People have
to understand a rational line of thinking to accept a point. You can't just
proclaim something using abstract concepts and hope to be accepted.

Scott McGregor writes...

>> YES! I find I always have to match the estimate my manager
>> suggests, because if I don't I'm in trouble. Actually this
>> makes estimating easy, except that I am still responsible
>> when the estimate is wrong.
>Well, I can understand that. But if you also say "no guts, no glory",
>what does this imply about the need for "backbone" in setting estimates
>and schedules that are reasonable?

I was talking about different situations. Actually I'm self-employed,
and the "manager" I spoke of was a client who always made
estimates for me. I gave him easy financing so he was able to
repay the 4X overbudget costs over several years. But if I
didn't accept his estimates, I'd have lost the work, which was
particularly good experience for my career at the time.


>
>Ah, but you can, you can pay an ASIC shop to do this for you. The difference
>is that the perceived cost in starting down this path to nonstandard
>solutions is high.

Yes!! Here hardware engineers can be faced with the same choice
as software engineers: the temptation to clean up a discrete logic
board by creating an ASIC, to get higher performance perhaps, and
encapsulation, etc.


I thank Scott McGregor very much for his helpful comments. I'm
sure if I had more experience in managing others (rather than only
myself) I perhaps would not have the urge to blow off so much
steam. Not all problems have easy answers. Also, now I say
"I know I can program it, but I don't have the experience with
it to estimate the needed time accurately." Experience IS key.

I guess the real reason I've gone through this exercise is to develop
a way of explaining to my clients "why" software can't be done on
an accurate schedule, such as by explaining the lack of standardized
components, developing rather than assembling, etc. Previous to
this I was by default a "bad estimator" because of my inability to
explain the situation in layman's terms to "non computer" types.
With no defense the offense wins.

Trip Martin writes....


>Actually, it was the consumer who demanded interchangeable parts for
>guns. ...The government agreed and poured lots of money into achieving
>that goal.
I'm thinking of how AM STEREO got messed up because the government
didn't come in and standardize. Now we have three or so types of
AM stereo and much more complicated decoder chips in the radios
to detect and decode each kind.

COLOR TV's introduction was smooth because of the decision to
support one format.

(Actually, a side issue I've wanted to discuss: after you see a TV
in a big TV station, you'll wonder why they are developing HDTV.
Upgrading the current system to give non-interlaced 60Hz NTSC TV
studio quality at home for less than $25,000 will be all the HDTV
we can use for a long time. In what newsgroup would such a topic
be discussed?)

What if the FCC began to regulate local area networks the way it
regulates the phone system, and proclaimed X-Windows to be the standard?

Perhaps we could get on with life and do bigger and better things
rather than waste man hours creating dozens of different GUIs?
(Anyone dare to tackle this one?)

Scott McGregor writes...

> The problem is that people don't reuse
>other people's code as easily.

Sometimes it takes less time to write something yourself than
to understand another program enough to predictably make use of
it. Sometimes it takes a thesis to explain something you can
do in a relatively short time.

> Rather I
>believe that the most important reasons that reuse of other people's
>code fails is that it goes against some of the psychological rewards
>that programmers want.

You mean some programmers just have to use their own code to "do
it right".

>The importance of human psychology on this problem is little appreciated,
>but is greater than we may think.

We have to look at how people get satisfaction in their work. If a programmer
gets satisfaction from "using his own code" then obviously he will always
resist using components. The activities by which a person gets satisfaction
can directly contradict the activities needed to produce acceptable work
(e.g., drug addiction is an extreme example). The need for a manager who is
sensitive to and can interdict destructive satisfaction-getting cannot
be overstressed. The same problem can occur with managers, for example,
if a manager gets satisfaction from saying humiliating things to subordinates
who don't know enough to fight back.

Edward Robertson writes...


>What has happened is that a variety of tools, such as dBASE and Paradox,
>have nice "interface builders" which, in the wrong hands, become mere
>"facade builders." People see menus, multi-colored displays, and
>whiz-bang functions keys and perceive an effective system, even though
>there may be "nothing substantial" behind.

:-):-):-):-):-)
I'll second that!

Tom Thomson writes...

>there is no de-sequentialising at
>all, programs written in OO languages would cease to work if the compiler
>and/or the execution system stopped enforcing sequential semantics.
>Claiming OOD provides a lot of gain in de-sequentialising our problems is

>pure bunkum.....
You're brutal!


>all of which are principles that were well understood and practised by
>competent software engineers (and language designers) long before the
>wonderful phrase "OO" appeared and became the flavour of the month.

and before any of us would have thought of writing a book about it. We
just thought that was the way you were supposed to program.

Will writes...


> In all the conversations so far, I've yet to see anyone mention learning
> the task you are trying to program.

That's all lumped into the design/architecture phase in our discussion. (I think)

>I am in the end days of a fairly complex project, writing a
> "user friendly (!?)" front-end to a DMS10 telephone switch.

You know, I think there should be MUCH MORE ATTENTION paid to the fact
that phone technology is the same as local area network (LAN) technology.
People should be aware that ever since the late 60s long distance calls
have been digital. Back then, it took 64 kbit/sec to digitize a channel;
now, with compression, it takes about 8 kbit/sec. A typical low-speed
inter-office trunk is a T-1, which runs at 1.5 Mbit/sec (about 190 KB/sec) -
the speed of a typical PC disk drive!! Imagine - you could have your disk
drive connected over a T-1! And they did this in 1970!

Note that you may NO LONGER have to send a "plumber" out to "tap" a local loop
to bug a phone! Now all you have to do is install a program that stores all
the digital data from a port (a phone number) and saves it on the hard disk
in the switch. Then the program can download this data over a T-1, or at
45 Mbit/sec over a T-3, to ANY switch in the country! That is, with a switch
in California you can download this program to a switch in Maine which will
"tap" the line, and then get the data the next day in California and "listen"
to it. Big Brother Is Watching! (I read this in an MIT student publication.)
I heard this type of software is "built into" the switch to test it, but it
of course can serve a dual purpose for law enforcement. A conversation takes
about 28 MB/hour to store but can be pumped over a T-3 in a few seconds! And
digital switches are permeating small-town America, where some of the local
loops are still on '50s open wire with glass ball insulators but are
connected to the real world with T-1 digital trunks.
What a range of technology!!
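A back-of-the-envelope check of those figures (nominal DS0/T-1/T-3 line
rates; a sketch in Python):

    DS0 = 64000        # bits/sec, one uncompressed voice channel
    T1 = 1544000       # bits/sec
    T3 = 44736000      # bits/sec

    bytes_per_hour = DS0 / 8 * 3600      # 28,800,000 bytes ~ 28.8 MB/hour
    t3_secs = bytes_per_hour * 8 / T3    # about 5 seconds over a T-3

    print(bytes_per_hour / 1e6, "MB per hour of conversation")
    print(round(t3_secs, 1), "seconds to move that hour over a T-3")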

PS WHAT newsgroup should the above phone discussion go into?????

> The switch is essentially a
> single purpose computer with 1.8 Megs. of executable written over a
> 15 year period.

An example of where OOP won't work, without starting over.

Rob Kling writes...


>I have yet to see a good book on the design of database systems
>that is aimed at highly interactive products
>(rather than at transaction-oriented
>mainframe systems).
>
>Have you seen anything of use for professionals or students
>of this kind?

Nope. Maybe you should write one! It would be some amount
of research to summarize what is bad and good; this type of
thing is so subjective.

Cliff

Scott McGregor

unread,
Jun 28, 1990, 4:17:14 PM6/28/90
to
+-Concerning Is Programming R&D or Production?, Lawrence Detweiler said:
|
| >Actually I've always thought the best analogy for software development
| >is movie production. It shares many similarities.
|
| If software development is like movie making, I have had the thought
| that programs are exactly opposite to Hollywood sets. A set has a
| flashy, majestic appearance from the front, but from behind there is
| nothing substantial. In a program, few have any idea of the subtle
| and magnificent webs of intricate interactions that lie between the
| choreography of twirling electrons in its wires to the parade of
| photons meeting our eyes. ... A program is an inside-out set.

I think that this characterization underappreciates the technical
aspects of a set. In order to produce just the right shadows, extra
lights may be moved in and hung. Wind machines and rain machines
may be used. Cameras and equipment have to be positioned in just the
right way, not only to get the right feel, but to make sure lights and
microphones don't show up in the edges of the frames of each shot.

A set is more than false front buildings just as a program is more than
its window display. When both are stripped from their context they both
appear flashy, majestic in front and insubstantial from behind. But
in their contexts (a movie shoot or program) they are both subtly beautiful
in management of "magnificent webs of intricate interactions" and
choreography.
While set designers and programmers find great beauty in their designs, they
also acknowledge ugly masses of unfortunate compromises to reality and market
conditions that are required.

More importantly, to the people on the shoot, the set has a "meaning"
different from the meaning that the audience feels when they watch the
movie. To those who participated, the meaning is in the telling of
the tale; to those who view it, it is the tale itself. Similarly, to
programmers, the "meaning" of the system is intimately tied to the
internal algorithm and data representation choices that combine to
"solve" the user's problem. To the user the meaning is merely the
problem solution itself. A good set is "opaque"--it hides the reality of
the movie production process from the viewer, who is thus able to suspend
disbelief and temporarily live in the apparent virtuality being displayed.
A good program can do the same--it can hide complexity from the user who
doesn't want to know HOW it was done, and provide them a simplified model that
they can treat as if it were real. Seeing a program as an inside-out set
may provide some valuable insights, but it misses much of what can be
learned from another field that has much to do with managing complexity on
large scales and molding abstractions into concrete realizations.

Scott McGregor
mcgr...@atherton.com

Brad Cox

unread,
Jun 28, 1990, 12:11:41 PM6/28/90
to
In article <31...@cup.portal.com> cliff...@cup.portal.com (Cliff C Heyer) writes:
>I think focusing on making the workplace a happier place by looking at
>psychological issues, etc. would prove more fruitful for productivity than
>OOP ever will. But participation in the psychology USENET newsgroup is
>at an all-time low.

You might be interested in how certain organizations are managing to beat
our butts in terms of quality and time to market by *one to two orders
of magnitude*.

Average of 2-3 programmers per terminal

Obsolete software technologies (assembler often)

200-300 desks per room, side by side, managers at the end of each
row.

Workdays average 10-14 hours/day.

The organization is, of course, a Japanese software development
organization such as Hitachi, NEC, etc. These are a quick summary from
memory of a workshop organized by Victor Basili and Colonel Will Stackhouse
at Univ. of Md on Tuesday of this week.

Of course this is not necessarily to disagree with your feeling that
psychology is what's different here. It is only that the word psychology
tends to be used to argue that the solution is reclining chairs and
more window space, rather than the Japanese "secret": a devastating
sense of *shame* on being found responsible for a bug that diminishes
the group's (closely monitored) performance with respect to other
groups, or (horrors) in the eyes of the customer.

lawrence.g.mayka

unread,
Jun 28, 1990, 8:15:29 PM6/28/90
to
In article <31...@stl.stc.co.uk> "Tom Thomson" <t...@stl.stc.co.uk> writes:
>provide the escape too). It's a pity that they don't provide decent
>abstraction and inheritance mechanisms too - maybe a functional OO
>language (a contradiction in terms?) would be the answer to our dreams?

A grand unification of functional and OO programming would probably
look a lot like the Common Lisp Object System: A generic function's
behavior would vary according to the class(es) of its argument(s).
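To see what that might feel like, here is a toy generic function in Python
whose behavior is selected by the classes of all its arguments, in the
spirit of CLOS multiple dispatch (it ignores inheritance and
method-ordering rules, which CLOS handles properly):

    class Generic:
        """A function whose behavior varies with its arguments' classes."""
        def __init__(self):
            self.methods = {}
        def method(self, *types):
            def register(fn):
                self.methods[types] = fn
                return fn
            return register
        def __call__(self, *args):
            for types, fn in self.methods.items():
                if len(types) == len(args) and \
                   all(isinstance(a, t) for a, t in zip(args, types)):
                    return fn(*args)
            raise TypeError("no applicable method")

    combine = Generic()

    @combine.method(int, int)
    def _(a, b):
        return a + b

    @combine.method(str, str)
    def _(a, b):
        return a + " " + b

    print(combine(1, 2), combine("functional", "OO"))   # 3 functional OO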


Lawrence G. Mayka
AT&T Bell Laboratories
l...@iexist.att.com

Standard disclaimer.

Kimball P Collins

unread,
Jun 30, 1990, 12:50:22 AM6/30/90
to
Will the original "recap so far" poster please contact me? I sent
some email to him and I would like a copy of what I sent to him.
('Twas before I had my mailer save outgoing mail.)

Thanks.

--

Not representing Amdahl nor necessarily myself.

Andy Klapper

unread,
Jul 2, 1990, 3:28:56 PM7/2/90
to
>I reassert that any artifact that we build can provide components for
>later reuse. I also assert that if we have built a component we can
>adequately describe it in a formal or informal notation so that its
>complete behavior is understandable by a human. I also know that
>providing such information for later reuse requires much labor at great
>cost. This is the impediment to software reuse. The issue is "given
>that you can describe a domain, how do you build an information repository in
>a cost-effective manner."
>
>As an example, take the domain of data structures. It is a relatively
>simple domain. It is well understood. There are both formal and
>informal methods for describing modeling representations and storage
>structures. We understand the complexity of them all, including special
>cases and programming tricks. Yet to date, there is not one reusable
>data structure depository. Why is that? Because the costs of putting
>all the knowledge contained in existing data structure books and
>articles would be incredible.
>

Smalltalk, Objective-C, and (I would be willing to bet) other OOP and
OOP-like languages have a set of basic data structure objects (Sets,
stacks, ...) including a sorted collection that uses quicksort. It would
seem that you have been looking in the wrong place.

I agree with you that reuse is hard. It costs much more to write code
that is really reusable and nobody gets it right the first time (even
if they have written reusable code before *in a different domain*.)
The places where I have seen the most software reuse so far (with
Objective-C, anyway) have been software shops where the work is clearly
defined and vertical (every year they produce a newer version of last
year's product - faster, better, stronger ...) - telecommunications,
financial, and industrial, to name a few. In these shops the software
that is reused is their own. In one really big shop (15,000+ classes)
the sharing of class libraries is across departments. In these
environments the 'information repository' problem is partially solved
by the fact that the next product team already knows what class libraries
they used in the last product they made and how they need to be tuned
(subclassed) for the current product. It is also solved by having
access to the person who wrote the class, so you can ask her "does this
do what I want?" or "what does this do?".

The Japanese are reusing software today, and getting a great deal
out of it. I do not think that they have found an easy way of solving
your 'information repository' question or my 'how to design for reuse'
question. I do think that they place a higher value on reuse and are
willing to put the time and effort into it because they feel that they
will get a higher return on this higher investment. The numbers that
I have seen quoted seem to back this up. 'There ain't no such thing as
a free lunch' or 'no pain, no gain' are my favorite quotes on the
subject.


--
The Stepstone Corporation Andy Klapper
75 Glen Rd. an...@stepstone.com
Sandy Hook, CT 06482 uunet!stpstn!andyk
(203) 426-1875 fax (203)270-0106

William Ricker

unread,
Jul 2, 1990, 10:38:10 PM7/2/90
to
t...@stl.stc.co.uk (Tom Thomson) writes:

>In article <268556...@petunia.CalPoly.EDU> [John R. Dudeck] writes:
>>
>>A lot of what is gained in OOD is in the de-sequentializing of our problems.
                             ^^^
>Every OO language I have seen makes this claim. However, every OO language
                                                                ^^^^^^^^^^^
>that I've seen (with one experimental exception, a laboratory toy language)
>insists that the messages are SYNCHRONOUS, that the channels between the
>processes are UNBUFFERED (and blocking), that every message has a REPLY

Who said OOD (Object Oriented Design) was limited to the design of one
program? I'd rather use OOD to design a system using asynchronous messages
between programs than any "functionally decomposed" SASD method.
Yes, Asynchronous tasks are good models for many objects, and OOPLs will
not be fully mature until the OOPLs supply this support naturally.

We used a synchronous OOPL with an asynchronous operating system to implement
a system-level object that would respond asynchronously. We used old
Objective-C (3.3) on early beta OS/2 LanManager! And it worked.

>Claiming OOD provides a lot of gain in de-sequentialising our problems is
>pure bunkum.

Confusing OOD & OOPL is confusion. OOD is a nice way to design asynchronous
systems. Claiming C++ is a suitable OOPL to make use of non-sequential
designs is bunkum. Claiming many OOPLs are desequentialized by nature is
bunkum. But OO, like functional, does not require a sequential model,
it's just [easier to start] implementing that way. <<exit jessica rabbit TM>>
I would be much less surprised to hear of a parallelizing C++ compiler than
a parallelizing C compiler.

[I won't try to start a feud on whether C++ (or Ada-- as
a friend calls it) is enough OOP for anything, but it's a good Lint,
which is more than can be said for any pre-ANSI C.]

What was the one lab-toy OOPL that you thought was not sequential?

By the way, this debate on sequential/synchronous v. Asynch probably needs
to be discussed relative to Brad Cox's (oh oh, grab asbestos, he's invoked
a NAME of DOOM) different scales of OO Integration --
IF OOSA and OOD are discussing the interfaces of whole subsystems or programs
rather than C++ classes or methods
(as I think Brad would say and I'd agree with)
THEN
It is natural that at these higher levels of abstraction the OO glue is
thought of, and should probably be implemented as, Asynch/nonblocking.
And
At finer levels of detail, it is natural to transition to Synchronous/
blocking semantics. (If this is below the HW/SW transition in your favorite
system, why lucky you! For the rest of us, we've got to make it happen.)
Thus,
At some level of integration, there must be a tool that maps between the
asynchronous messages of one and the synchronous RPCs of the other, and
vice versa. This may be an operating system with Sockets, Pipes, Messages;
or a spiffy linker; or a thing we'd call a programming language.
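One way to picture such a mapping tool, sketched in Python (queue and
thread standing in for the operating system's message machinery):

    import queue
    import threading

    def make_async(handler):
        """Put a synchronous handler behind an asynchronous message queue."""
        inbox = queue.Queue()
        def worker():
            while True:
                msg, reply = inbox.get()
                reply.put(handler(msg))   # the synchronous call, fine-grained
        threading.Thread(target=worker, daemon=True).start()
        def send(msg):
            reply = queue.Queue(maxsize=1)
            inbox.put((msg, reply))
            return reply                  # caller collects the result later
        return send

    send = make_async(str.upper)
    pending = send("hello")               # asynchronous, non-blocking send
    print(pending.get())                  # synchronous rendezvous when needed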

Cheers,
Bill

<I hope my silly signature file appears here>
--
/bill ricker/
w...@wang.com a/k/a wri...@northeastern.edu
*** Warning: This account not authorized to express opinions ***

Brad Cox

unread,
Jul 2, 1990, 7:52:39 PM7/2/90
to
>In article <85...@jpl-devvax.JPL.NASA.GOV> ka...@AI-Cyclops.JPL.NASA.GOV writes:
:I reassert that any artifact that we build can provide components for
:later reuse. I also assert that if we have built a component we can
:adequately describe it in a formal or informal notation so that its
:complete behavior is understandable by a human. I also know that
:providing such information for later reuse requires much labor at great
:cost. This is the impediment to software reuse. The issue is "given
:that you can describe a domain, how do you build an information repository in
:a cost-effective manner."
:
:As an example, take the domain of data structures. It is a relatively
:simple domain. It is well understood. There are both formal and
:informal methods for describing modeling representations and storage
:structures. We understand the complexity of them all, including special
:cases and programming tricks. Yet to date, there is not one reusable
:data structure depository. Why is that? Because the costs of putting
:all the knowledge contained in existing data structure books and
:articles would be incredible.

Of course, as you're no doubt aware, the costs of the software crisis
are even more incredible, and increasing fast as we bleakly trudge into
the Age of Information. In fact, if the Age of Manufacturing is any
indication, the costs of *not* solving it are likely to be a matter
of not only technical interest or even personal prosperity, but
national prosperity.

I share your enthusiasm for formal/informal methods of specifying
representations and storage structures, as well as your appreciation
for their costs. What you've called a depository, I'd prefer to call
a software components marketplace to emphasize that a successful solution
must be an active place where producers and consumers continually
interact, rather than a stagnant pool, database, repository, etc.

But such a marketplace brings humanity face to face with a problem
we've never faced before; managing a marketplace (i.e. a place of
competing and cooperative interests) in which the products are so
completely intangible, not to mention easy to rip off, as small
granularity software components. Therefore Stepstone has been actively
pursuing formal/informal specification/testing technologies for several
years as a solution to the intangibility problem, which we view not
merely as a matter of technical or even commercial interest, but a
matter of global/national significance in the Age of Information.

I go into all this in greater detail in an article under review for
IEEE Software magazine, November 1990, titled "Planning the Software
Industrial Revolution; The Impact of OO Technologies". Send a mailing
address if you'd like a draft copy.

Kirk Kandt

unread,
Jul 3, 1990, 1:00:06 PM7/3/90
to
In article <53...@stpstn.UUCP>, an...@stpstn.UUCP (Andy Klapper) writes:
|> In article <85...@jpl-devvax.JPL.NASA.GOV> ka...@AI-Cyclops.JPL.NASA.GOV writes:
|> >
|> >I reassert that any artifact that we build can provide components for
|> >later reuse. I also assert that if we have built a component we can
|> >adequately describe it in a formal or informal notation so that its
|> >complete behavior is understandable by a human. I also know that
|> >providing such information for later reuse requires much labor at great
|> >cost. This is the impediment to software reuse. The issue is "given
|> >that you can describe a domain, how do you build an information repository in
|> >a cost-effective manner."
|> >
|> >As an example, take the domain of data structures. It is a relatively
|> >simple domain. It is well understood. There are both formal and
|> >informal methods for describing modeling representations and storage
|> >structures. We understand the complexity of them all, including special
|> >cases and programming tricks. Yet to date, there is not one reusable
|> >data structure depository. Why is that? Because the costs of putting
|> >all the knowledge contained in existing data structure books and
|> >articles would be incredible.
|> >
|>
|> Smalltalk, Objective-C and I would be willing to bet other OOP and OOP
|> like languages, have a set of basic data structure objects (Sets, stacks,
|> ...) including a sorted collection that uses quicksort. It would seem
|> that you have been looking in the wrong place.
|>
I've used these languages before -- having a basic set of objects is not
the issue. The issue is that the implementation of a "type" can be done
in a variety of ways, where each implementation has certain advantages
and disadvantages in terms of time and space. A set, for example, can
be implemented as a bit array, a simple array, a linked list, a hash
table, and so on. Smalltalk, Objective-C, etc. generally provide only
one implementation of an "abstract type". This one implementation is
chosen because it works reasonably well for all cases, but it may not be
optimal for a specific data set; in fact, it may be orders of magnitude
slower than one tailored for the characteristics of the data set.
Automatic data structure selection was a hot topic in the 70s because
people realized that there were tremendous pay-offs (in terms of
efficiency) by optimizing data storage and manipulation algorithms for
the expected data.
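The kind of selection being described, in a minimal Python sketch (the
representations and the selection rule are invented for illustration):

    class BitArraySet:
        """Dense small non-negative ints: O(1) ops, universe-sized memory."""
        def __init__(self, universe):
            self.bits = [False] * universe
        def add(self, x):
            self.bits[x] = True
        def __contains__(self, x):
            return 0 <= x < len(self.bits) and self.bits[x]

    class HashSet:
        """Anything else: good average-case behavior across the board."""
        def __init__(self):
            self.items = set()
        def add(self, x):
            self.items.add(x)
        def __contains__(self, x):
            return x in self.items

    def make_set(sample):
        """Pick a representation from the characteristics of the data."""
        if all(isinstance(x, int) and 0 <= x < 1024 for x in sample):
            return BitArraySet(1024)
        return HashSet()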

Ed Prochak

unread,
Jul 3, 1990, 1:48:01 PM7/3/90
to
I am way behind in this discussion, so if this point has been covered,
go ahead and ignore my bringing it up again. One point that John makes
is that you should be able to pick objects and use them without regard
(at least initially) to performance:

In article <268556...@petunia.CalPoly.EDU>,
jdu...@polyslo.CalPoly.EDU (John R. Dudeck) writes:
>

> cliff...@cup.portal.com (Cliff C Heyer) writes:

[deleted stuff about outside interfaces]
> >
> >But nothing to change the fundamental problem: When you are programming
> >you are not assembling pre-engineered components. You are developing
> >a new sequence of steps to perform a task. This is not and will never be
> >accurately quantifiable. (I'm just waiting for some manager to give me
> >his logic why this is wrong...and post it)
>
> I would like to see some managerial-level input on this topic, too!
>
[deleted psychology stuff too]
>
> >With objects, you may have to tune that internal algorithm for a specific
> >task to save CPU time. Oh, but wait - you can't do this because this
> >would violate the fundamental principle of objects. Why should you not be
> >able to take advantage of a flexibility that software affords over
> >hardware???
>
> The idea is that you shouldn't have to be concerned unless you want to be,
> about such aspects as algorithm fine-tuning. Certainly I wouldn't advocate
> that algorithms shouldn't be fine-tuned to an application. I would say
> that if you can show that an algorithm used in an object in your application
> is the cause of performance that does not meet your requirements, then you
> should find the sourcecode of the object and fine tune it, or else otherwise
> come to a solution.
>
> >>But that does not
> >>mean that every time I go to write a program I will write every line of
> >>code from scratch. If there is a way that I can use well-made components,
> >>so much the better. Then I can spend my energy working on the real design
> >>issues of the project.
> >I agree, but my experience with the "real world" makes me suggest this
> >to be impractical.
>
> You're probably right for now. Whether or not this continues to be
> impractical in the future will depend on how well the technology
> catches on.
>

I haven't done any OOP yet, so my comments are based on reading this
newsgroup and Abstract Data Types ideas.

John assumes access to the source code. This is unlikely for
objects purchased from a software vendor. It seems that for there to
be a component marketplace, there must be several vendors of the same
or similar objects. How will the vendors promote their product against
a competitor?

Consider a market for software components with two types of customers:
1 mainframe system users (or at least systems with virtual menory)
with large processing requirements.
2 micro users with embedded systems applications.

Both need some graphics widgets with essentially the same properties
except that
-users in group 1 expect fast response for many on-line users
-users in group 2 need to have a known maximum memory requirement.

Vendors may develop the widgets with various performance properties,
one vendor with a fast, memory-hog widget and another with a slower
but memory-lean widget. Beyond those attributes, the widgets are
exactly the same. There may even be other vendors with widgets whose
performance and memory requirements are in between the first two. The
users will buy the ones that match their requirements. And if the
requirements change, they can move to another widget, possibly from
another vendor. Hopefully, some interface standards will allow the
vendors' components to interoperate.

I see it as very similar to the choice in hardware: should the
product use super fast logic like ECL? Or must it meet strict
power limits, so CMOS is best? Or something in between?
Even in regular TTL there are families: Schottky (S), Low-power
Schottky (LS), Advanced LS (ALS), others?? with different power
and speed tradeoffs.
Note that hardware has interoperability requirements also,
for example signal voltage levels must be compatible.

My bottom-line point is that tuning may not require changing source
code. It may just as likely involve swapping one object for a similar
one having different performance attributes that better match the
application.
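A sketch of that swap in Python (the gauge widget and the two vendors are
invented; the point is that the client is written against the shared
interface):

    class FastGauge:
        """Vendor A: precomputes every frame; fast but a memory hog."""
        def __init__(self, size):
            self.frames = ["frame %d" % i for i in range(size)]
        def draw(self, i):
            return self.frames[i]

    class LeanGauge:
        """Vendor B: computes on demand; slower but bounded memory."""
        def draw(self, i):
            return "frame %d" % i

    def dashboard(gauge):
        """Client code: depends only on the shared draw() interface."""
        return [gauge.draw(i) for i in range(3)]

    assert dashboard(FastGauge(10)) == dashboard(LeanGauge())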

Does this sound close to being in the ballpark?


(Pardon the inconvenience during our remodelling of the signature file)
Edward J. Prochak (216)646-4663 I think.
{cwjcc,pyramid,decvax,uunet}!e...@icd.ab.com I think I am.
Allen-Bradley Industrial Computer Div. Therefore, I AM!
Highland Heights,OH 44143 I think? --- Moody Blues

Mark Paulk

unread,
Jul 3, 1990, 2:36:18 PM7/3/90
to
A colleague has requested some more info on the CRC design method
discussed a few weeks ago. I neglected to save any info on it, and
it has expired on our system. If anyone out there kept a copy, I'd
appreciate it if you would e-mail it to me.

Thanks in advance,

--
Mark C. Paulk m...@sei.cmu.edu

"Maturity is a function of scar tissue."

John R. Dudeck

unread,
Jul 3, 1990, 6:28:25 PM7/3/90
to

In article <53...@stpstn.UUCP> c...@stpstn.UUCP (Brad Cox) writes:
>
>Stepstone has been actively
>pursuing formal/informal specification/testing technologies for several
          ^^^^^^^^^^^^^^^
>years as a solution to the intangibility problem, which we view not
>merely as a matter of technical or even commercial interest, but a
>matter of global/national significance in the Age of Information.
>
>I go into all this in greater detail in an article under review for
>IEEE Software magazine, November 1990, titled "Planning the Software
>Industrial Revolution; The Impact of OO Technologies". Send a mailing
>address if you'd like a draft copy.

Exactly what do you mean by formal in relationship to informal?
Are you saying that there are both formal and informal methods used in
your specification and testing activities? Is there some guideline as
to what is to be done formally and what can be informal?

It seems to me that in any case where formal methods are used, there are
going to be many areas that are left to informal treatment.

My observation has been that methods and tools that allow the requirements
to be represented in a concise manner are essential. This implies that
there is an accepted "language" that is powerful enough to allow a concise
representation of a design. Some methods are more formal (read mathematical)
than others. It seems to me that if we are to make the intangibles more
tangible, we need to make advances in finding ways to represent our designs
in an informal yet concise and precise manner.

--
John Dudeck "I always ask them, How well do
jdu...@Polyslo.CalPoly.Edu you want it tested?"
ESL: 62013975 Tel: 805-545-9549 -- D. Stearns

William F Ogden

unread,
Jul 3, 1990, 7:40:15 PM7/3/90
to

> .... The issue is "given
>that you can describe a domain, how do [you] build an information repository in


>a cost-effective manner."
>
>As an example, take the domain of data structures.

...


>My thesis work was on managing design information for later reuse. I
>took a couple of toy problems -- the Dutch National Flag algorithm and
>quicksort -- and described them so that they could be later reused. The
>amount of information obviously was subjective. But in each case it
>took approximately 5-10 pages to describe them so that I felt someone
>else could understand and reliably reuse them.

To a subsequent message about data structure objects in Smalltalk, etc.
he writes:

>I've used these languages before -- having a basic set of objects is not
>the issue. The issue is that the implementation of a "type" can be done
>in a variety of ways, where each implementation has certain advantages
>and disadvantages in terms of time and space. A set, for example, can
>be implemented as a bit array, a simple array, a linked list, a hash
>table, and so on. Smalltalk, Objective-C, etc. generally provide only
>one implementation of an "abstract type". This one implementation is
>chosen because it works reasonably well for all cases, but it may not be
>optimal for a specific data set; in fact, it may be orders of magnitude
>slower than one tailored for the characteristics of the data set.

...

Part of the solution of the problem of `managing design information for
later reuse' while still admitting myriad implementations is to recognize
commonalities (i.e. seek useful generalizations). For example, when viewed
properly, the description of quicksort factors into a portion which is
common to all sorting components and a portion that is peculiar to
quicksort. This first portion gives a conceptual description of the
functionality provided by any sorting facility as well as of the most
general types of objects on which it will work. Quicksort, mergesort,
heapsort, etc. are then described as realizations for this concept
rather than as autonomous entities. The descriptive portion of each such
realization of course includes particular realization conventions,
correspondences to the general concept, performance specifications, etc.
Similarly, one can identify the concept of a general set template which
admits array, linked list, hash table, etc. realizations -- each providing
the functionality specified by the set template, but differing in performance
characteristics.

Finding a suitable reusable component then involves first finding the right
concept, then finding the realization with the most appropriate performance
characteristics. The point is that there are at least an order of magnitude
fewer general concepts for facilities than there are useful realizations --
not to mention porcine ones.

Moreover, most reusable design information involves the general
concepts and not particular realizations, so the concept/realization
separation serves well here.
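A small Python sketch of that separation (the registry and the performance
vocabulary are invented): one sorting concept, several realizations, each
carrying its own performance description, with lookup by concept first and
performance second:

    REALIZATIONS = {}

    def realization(name, **performance):
        """Register fn as one realization of the sorting concept."""
        def register(fn):
            REALIZATIONS[name] = (fn, performance)
            return fn
        return register

    @realization("quicksort", average="n log n", worst="n^2", stable=False)
    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        pivot = xs[len(xs) // 2]
        return (quicksort([x for x in xs if x < pivot])
                + [x for x in xs if x == pivot]
                + quicksort([x for x in xs if x > pivot]))

    @realization("mergesort", average="n log n", worst="n log n", stable=True)
    def mergesort(xs):
        if len(xs) <= 1:
            return xs
        a, b = mergesort(xs[: len(xs) // 2]), mergesort(xs[len(xs) // 2 :])
        out = []
        while a and b:
            out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
        return out + a + b

    def find(**wanted):
        """Concept first, then the realization whose performance fits."""
        for fn, perf in REALIZATIONS.values():
            if all(perf.get(k) == v for k, v in wanted.items()):
                return fn

    assert find(stable=True)([3, 1, 2]) == [1, 2, 3]
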
/Bill

lawrence.g.mayka

unread,
Jul 3, 1990, 9:16:02 PM7/3/90
to
In article <53...@stpstn.UUCP> an...@stpstn.UUCP (Andy Klapper) writes:
>Smalltalk, Objective-C and I would be willing to bet other OOP and OOP
>like languages, have a set of basic data structure objects (Sets, stacks,
>...) including a sorted collection that uses quicksort. It would seem
>that you have been looking in the wrong place.

Add Common Lisp to the list. Its set of built-in datatypes
includes exact rational numbers (with a numerator and denominator,
of arbitrary sizes), adjustable arrays, hash tables, conditions
(exception objects), generic functions (sets of methods with the
same selector), etc.

lawrence.g.mayka

unread,
Jul 4, 1990, 3:20:31 PM7/4/90
to
In article <85...@jpl-devvax.JPL.NASA.GOV> ka...@AI-Cyclops.JPL.NASA.GOV writes:
>table, and so on. Smalltalk, Objective-C, etc. generally provide only
>one implementation of an "abstract type". This one implementation is
>chosen because it works reasonably well for all cases, but it may not be
>optimal for a specific data set; in fact, it may be orders of magnitude
>slower than one tailored for the characteristics of the data set.

The Table Management Facility of Symbolics Genera offers automatic
data structure selection and mutation within the interface of a
Common Lisp hash table. That is, the physical representation of
the table - as association list, set, block array, etc. - depends
on the characteristics of the currently stored dataset (and on any
optional directives and hints supplied by the programmer).

Common Lisp itself sometimes offers multiple datatypes for similar
purposes (but with differing time/space tradeoffs) - e.g., the
representation of a set as either a list, a bit vector, or a
binary-coded integer. The choice of datatype must usually,
however, be specified explicitly by the programmer.
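The last of those representations, concretely (a Python sketch; Python's
arbitrary-size integers play the role of Lisp bignums):

    def make_set(*xs):
        """A set of small non-negative ints, coded as bits of one integer."""
        s = 0
        for x in xs:
            s |= 1 << x
        return s

    def contains(s, x):
        return (s >> x) & 1 == 1

    def union(a, b):
        return a | b

    def intersection(a, b):
        return a & b

    s = union(make_set(1, 3), make_set(3, 5))
    assert contains(s, 5) and not contains(s, 2)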

Andy Klapper

unread,
Jul 5, 1990, 9:50:43 AM7/5/90
to

This is true. I can think of two possible 'solutions',
1) Have one class (say Set) that automatically selects an algorithm
based on the data. (Each object in the set would respond to some
message, say optimizeFor, that would return how it wants to be
optimized. Above some mixture of requests the class would revert
to the basic method - you wouldn't want to de-optimize because one out
of a hundred objects in the set disagreed over how to optimize.)
The other option is to tell the class how to optimize and leave it
to the programmer to know what is best. Personally I'd do both,
but I always want every option :)

2) Create a number of classes (say of Set), each one optimized in
some way. Our ICpak201(TM) product comes with a class called
HashSet for exactly the reasons you stated above. The default
implementation was too slow because it was designed to give
good performance over most cases. The designers of ICpak201(TM)
needed a faster set for a limited case.

In Brad's world you could buy different Set classes from different vendors
out of a software catalogue and have all of the protocols the same, but the
implementations differ.
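A toy version of option 1 in Python (the threshold and the two
representations are invented; the point is the silent representation
change):

    class AdaptiveSet:
        """Starts as a compact list, rebuilds itself as a hash set when big."""
        THRESHOLD = 32

        def __init__(self):
            self.small = []      # fine, and compact, for a few items
            self.big = None      # becomes a hash-based set if we grow

        def add(self, x):
            if self.big is not None:
                self.big.add(x)
            elif x not in self.small:
                self.small.append(x)
                if len(self.small) > self.THRESHOLD:
                    self.big = set(self.small)   # the representation mutation
                    self.small = None

        def __contains__(self, x):
            return x in (self.big if self.big is not None else self.small)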

Jon Jacky

unread,
Jul 6, 1990, 1:48:38 PM7/6/90
to
A recent posting in this newsgroup reported some impressive findings about
Japanese software development practices and performance. I have other news
from Japan that presents a different view.

Here is the excerpt from the original posting by Brad Cox (c...@stpstn.UUCP):

> You might be interested in how certain organizations are managing to beat
> our butts in terms of quality and time to market by *one to two orders
> of magnitude*.
>
> Average of 2-3 programmers per terminal
>
> Obsolete software technologies (assembler often)
>
> 200-300 desks per room, side by side, managers at the end of each
> row.
>
> Workdays average 10-14 hours/day.
>
> The organization is, of course, a Japanese software development
> organization such as Hitachi, NEC, etc. These are a quick summary from
> memory of a workshop organized by Victor Basili and Colonel Will Stackhouse
> at Univ. of Md on Tuesday of this week.

I forwarded this posting to a friend of mine, a graduate student in computer
science who is spending the summer at the Tokyo Institute of Technology,
studying Japanese software development practices. He in turn forwarded the
message to a colleague who works at Hitachi. Here is that colleague's
response:

------------------

I'd like to ask: are the Japanese really beating the US on software
development? My current answer is "no." Even if we focus on the productivity
of Japanese programmers, often I hear the estimate of 10 lines (of a
high-level programming language) of debugged code per day per programmer.

Here is my feedback on the claims made:

Average of 2-3 programmers per terminal

--- This situation is no longer true.

Obsolete software technologies (assembler often)

--- Assembler? You gotta be kidding.
At any rate, they are aggressively importing
software development tools.

200-300 desks per room, side by side, managers at the end of each
row.

--- This is physically impossible. How big a room do
you need to fit 200 to 300 desks?

Workdays average 10-14 hours/day.
--- Yeah. Regardless of whether you are Japanese or
non-Japanese, you tend to write code that you
don't want to see the next day after working
14 hours. Then how do longer working days add
to productivity? Also you need to
remember that a fair number of these hours are
spent on administrative tasks, much more so
than in the US.

I don't have a good feel for the reliability of Japanese software, but
as far as quality is concerned, I don't think Japanese systems are well made.
Remember, quality is not just reliability but also includes the design
of the system. In this area, I think they lag well behind. The problem
analysis and solving ability of average Japanese programmers is not that
good, in my opinion.

This is my assessment of the situation. What I'd like to ask
the guy who wrote the original article is: if Japanese software
houses' quality is that good, why doesn't he just contract out his
company's development to the Japanese? Wouldn't that make more business
sense?

(end of response from Japan)

- Jon Jacky, University of Washington, Seattle j...@gaffer.rad.washington.edu

M.Marking

unread,
Jul 7, 1990, 6:43:10 PM7/7/90
to

j...@cs.washington.edu (Jon Jacky) writes:

) A recent posting in this newsgroup reported some impressive findings about
)...
) I forwarded this posting to a friend of mine, a graduate student in computer
) science who is spending the summer at the Tokyo Institute of Technology,
) studying Japanese software development practices. He in turn forwarded the
) message to a colleague who works at Hitachi. Here is that colleague's
) response:
) ------------------
) I'd like to ask: are the Japanese really beating the US on software
) development? My current answer is "no." Even if we focus on the productivity
) of Japanese programmers, often I hear the estimate of 10 lines (of a
) high-level programming language) of debugged code per day per programmer.

) Here is my feedback on the claims made:

) Average of 2-3 programmers per terminal
) --- This situation is no longer true.

Well, when you work in a group that thinks in terms of 2-3 terminals per
programmer, the U.S. still seems somewhat ahead, even if Japan gives its
engineers a terminal each. Hardware *is* more expensive in Japan, almost
any way you measure it. It costs half again as much to buy an equivalent
PC in Japan compared to the U.S., and some commodities (certain types of
network hardware, 300+ megabyte drives, and so on) are difficult to find
(and afford).

) Obsolete software technologies (assembler often)
) --- Assembler? You gotta be kidding.
) At any rate, they are agressively importing
) software development tools.

Mixed reviews here. By U.S. standards, Japan is behind. But there are
two reasons for this. The first is that they are simply behind,
especially in networks. The other is that Japan values some technologies
differently than the U.S. Japan produces very good stuff in certain
areas - I'm referring to final products, not to prototypes or research -
such as automation, artificial intelligence, entertainment, and certain
types of networking. Americans don't think it's important to build washing
machines that contain fuzzy logic and dynamically adjust the wash and
rinse cycles to the changing dirt and solvent levels in the clothes,
so they tend to ignore such "practical" things.

) 200-300 desks per room, side by side, managers at the end of each
) row.
) --- This is physically impossible. How big a room do
) you need to fit 200 to 300 desk per room?

Things *are* more crowded in Japan. One of the reasons portables and
laptops sell so well there is that they take up less room on desks.
They are bought even when there is no intention of carrying them around.
I've never seen 200 desks in a room, but fifty or a hundred isn't
unusual. It isn't the norm, either.

) Workdays average 10-14 hours/day.
) --- Yeah. Regardless of whether you are Japanese or
) non-Japanese, you tend to write code that you
) don't want to see the next day after working
) 14 hours. Then how do longer working days add
) to productivity? Also you need to
) remember that a fair number of these hours are
) spent on administrative tasks, much more so
) than in the US.

There is some truth on both sides here, and I still can't make up
my mind if the longer hours and greater dedication and commitment
actually yield meaningful gains.

) I don't have a good feel for the reliability of Japanese software, but
) as far as quality is concerned, I don't think Japanese systems are well made.
) Remember, quality is not just reliability but also includes the design
) of the system. In this area, I think they lag well behind. The problem
) analysis and solving ability of average Japanese programmers is not that
) good, in my opinion.

Again, it depends on the value system. If the tool does the job - or,
rather, if it makes money...

Consider the lowly cash register. Most people (that can include us
software types) don't think much about such commodities, but there are
a lot of cash registers ("point of sale terminals") that are running
multitasking, real-time operating systems with disks and graphics and
local area net support. Do we or the store owner or almost anyone else
care if the code inside is structured or spaghetti or in assembler or
COBOL (yes, there are cash registers running COBOL) as long as the right
barcode gives the right price and it doesn't bill the wrong amount to
our credit cards?

We can't reasonably separate the software from its context. I don't
think it especially relevant if a product contains elegant code if such
elegance results in the failure to satisfy the customers' needs. In
this respect, Japanese software engineering is healthily pragmatic. While
Japan improves its techniques (it is gaining on the U.S.) it is
making money along the way. Think about that the next time you use
your computerized Japanese camera, your computerized Japanese automobile,
or your computerized Japanese fax machine or copier: do you really care
about the code inside?

As a mathematician, I appreciate elegance and simplicity. As an
engineer, I appreciate the ability to solve problems.

So I don't think there is a fair answer to the above without defining
"problem solving ability" or even specifying the goal. I don't mean
the above argument to imply that some other point of view is wrong, but
rather to show how subjective it all is.

When all else is equal, the *real* threat to the U.S. software industry
from the Japanese software industry will arise because U.S. programmers
don't know how to work in groups as well as the Japanese. As technology
and industry progress, there will be more and more large projects,
things that weren't feasible in the past. All of this wonderful stuff
like CASE and reusable code and the like is extremely valuable, but
it doesn't eliminate the need to work with others.

) This is my assessment of the situation. What I'd like to ask
) the guy who wrote the original article is this: if Japanese software
) houses' quality is that good, why doesn't he just contract out his
) company's development to the Japanese? Wouldn't that make more
) business sense?

That is exactly the response that is wrong. I suppose there are a few
"pure" programming projects out there, but most of the work being done
(and needing to be done) involves understanding the environment in which
the software will be used. I don't care what the myth says, it's
unusual to be able to specify a problem, then give it to someone else
to code. Developing software is an iterative, interactive process,
requiring effort by the user as well as by the developer.

Roger Meunier

unread,
Jul 9, 1990, 11:48:50 AM7/9/90
to
In article <18A...@drivax.UUCP> mar...@drivax.UUCP (M.Marking) writes:

>We can't reasonably separate the software from its context. I don't
>think it especially relevant if a product contains elegant code if such
>elegance results in the failure to satisfy the customers' needs. In
>this respect, Japanese software engineering is healthily pragmatic.

This is where the Japanese shine. They like to develop software
products which provide *all* the functionality a customer needs. They
listen to the customer and make lots of revisions. The internal code
limits the inclusion of new features at times, but for the most part
this is not seen as a major problem. *New* (to Japan) software development
practices (of the modular sort) are making revisions faster and
cleaner.



>When all else is equal, the *real* threat to the U.S. software industry
>from the Japanese software industry will arise because U.S. programmers
>don't know how to work in groups as well as the Japanese.

I've experienced the opposite. The Japanese work by consensus, which often
boils down to taking the lowest common denominator among the peers.
This tends to bog things down and prevents tackling large system
designs in a timely fashion. It also makes the introduction of new
technologies difficult, because there are always dissenting voices
warning that such things are too risky.

The *real* threat is hidden in your next thought:

> As technology
>and industry progress, there will be more and more large projects,
>things that weren't feasible in the past. All of this wonderful stuff
>like CASE and reusable code and the like is extremely valuable, but
>it doesn't eliminate the need to work with others.

Although CASE is still a lot of hot air, the fact that technology is
progressing and becoming increasingly *proven* in the marketplace will
aid the Japanese more than anyone else. Once an idea is *proven* to
work, the *risk* disappears and the Japanese adopt the *new* technology
overnight. It is clever *management*, not working in groups, which
determines the success of the Japanese software industry. The people
who pull the strings here are making good, sound management decisions.

Once the Japanese see that something works and is *profitable*, they
come up to speed quickly, often by brute force (long hours, subcontracting,
etc.). They may not be on the cutting edge of software technology,
but they know how to take advantage of accepted methodology.

>That is exactly the response that is wrong. I suppose there are a few
>"pure" programming projects out there, but most of the work being done
>(and needing to be done) involves understanding the environment in which
>the software will be used. I don't care what the myth says, it's
>unusual to be able to specify a problem, then give it to someone else
>to code. Developing software is an iterative, interactive process,
>requiring effort by the user as well as by the developer.

Again, this is where the Japanese shine. Although there is a lot of
redundancy and inefficiency along the way, they understand the user's
needs and produce *usable* products. The coding may not be efficient
and the project not well coordinated, but the end result is a product
which sells.

How do we judge software engineering? By profit-and-loss statements,
user satisfaction, or by some internal standards which the *real*
world never sees, like code reusability, modularity, maintainability,
etc.? The Japanese, at least at this juncture, are less concerned
with "pure" programming than turning a profit. That's sound business
practice here, and I think holds true in the West, also. But for
those of us who have to design and build the products, we know there is
a better way; it's just hard to get upper management to give the
resources to get the job done *right*.


--
Roger Meunier @ Zuken, Inc. Yokohama, Japan (ro...@zuken.co.jp)

Cliff C Heyer

unread,
Jul 8, 1990, 5:48:13 PM7/8/90
to
This was supposed to go to

Is Programming R&D or Production?

Cliff
