Then I got thinking, "well do they really do it?"
I did some research, and came up with the following comparison:
(Please pardon my use of generalizations below, and try to get
to the essence of what I am saying. I don't have the time, and the
net does not have the BW, for me to transmit a book on this subject.)
Summary of Steps Needed To Create a New Building:
STEP DESCRIPTION
1. Architecture Drawing of outside appearance, models.
Per Diem Expense. Major design elements.
Detail drawings of appearance:
Skin, face, hair; cosmetic portions.
Structural engineering:
Make sure that cosmetic aspects can be
affordably constructed with currently
available supplies.
Consult with Construction people to
compromise between materials and cost.
2. Engineering Mechanical and systems design on paper:
Per Diem Expense. Every nut, bolt, pipe, wire, screw is accounted
for.
Elevators, electrical systems, mechanical
systems, plumbing systems. Heart, pump,
valves, veins that make body tick.
An accounting is made for what each item is and
where it is used.
3. Construction Turn paper into reality on time and on budget
Fixed bid expense.
Summary of Steps Usually Used to Create a New Software Product:
STEP DESCRIPTION
1. Architecture Drawing of outside appearance, models.
Per Diem Expense. Major design elements.
Detail drawings of appearance:
Skin, face, hair; cosmetic portions.
Structural engineering:
Make sure that cosmetic aspects can be
affordably constructed with currently
available libraries.
Consult with Construction people to
compromise between materials and cost.
2. Engineering Never done.
3. Construction Construction person(s) asked to make an
Fixed bid expense. estimate on the architectural rendering.
Major systems are sketched out,
but numerous details are overlooked
which cause the estimate to be far off.
The "engineering" is done during
"construction" at a fixed price,
with the "construction" person paying
the difference if the estimate is wrong.
Summary of Steps That Should Be Used To Create a New Software
Product:
STEP DESCRIPTION
1. Architecture Drawing of outside appearance, models.
Per Diem Expense. Major design elements.
Detail drawings of appearance:
Skin, face, hair; cosmetic portions.
Structural engineering:
Make sure that cosmetic aspects can be
affordably constructed with currently
available libraries.
Consult with Construction people to
compromise between materials and cost.
2. Engineering Systems design. Designed by entering
Per Diem Expense. design (program) into a computer and
using it to make it work as per
architectural specifications.
IMPLEMENTATION OF EVERY FEATURE IS
SPECIFIED DOWN TO EACH LINE OF CODE -
This is the definition of "engineering".
If every line of code is not specified,
it is part of "Architecture" above.
Windowing system, data base engine
system, interrupt trapping system,
context sensitive help system, drop
down menu system.
3. Construction No construction phase with software; the
process is finished when the
engineering phase is complete. You
have a computer printout of the program
which is an "on paper" design.
What I found is that when I am asked to give an estimate, it
is the same as if you asked a construction company to give
an estimate with no engineering work. Obviously this is not done.
So why are software persons asked to do it?
And who says you can estimate this in the first place? If you do a
100% engineering of the software product, you have already
finished it as a per diem expense. There is no construction left
to do, as once you have it on paper that is all there is to be
done.
I argue that if you design a software program to the same level of
detail that a building is designed to, YOU WILL HAVE TO WRITE THE
SOFTWARE.
But in the software industry, this somehow is never done. People
are always asked to give estimates based on appearance: report
formats, screen/window layout, etc.
And therefore, every software "estimate" based on the architecture
phase is bound to be flawed, and persons making such estimates
should not be chastised for any errors. They are at best educated
guesses.
This process of making an estimate without any engineering makes
the project cost much more, because assumptions are made without
any programming being done. Several layers of bad assumptions
cause the wrong programming to be done when the "construction"
phase finally comes around, and then the bad assumptions are
discovered and hundreds of man hours are thrown out. If during the
engineering phase the code was actually written (at per diem),
these bad assumptions would be discovered before they become
layered and the wrong code would never be written.
One building is like another because a door hinge is a door hinge.
Different programs have different algorithms that are NOT the
same. They require R&D to develop, and I resent this being pushed
into and being classified as production for the convenience of
management or for any other reason.
When you have to figure out HOW to make something work, it's R&D -
plain and simple. Either you find someone who already knows HOW and
charge it as a production expense, or you R&D it with a person who has
to figure out HOW. You don't take a person who does not know HOW to
do something and classify them as production, yet this is done in
software all the time.
Then they hold the old carrot out there, "well just do the best
you can and there will be a big bonus for you..."
Determining HOW is not an estimable expense. How much of the
time do programmers spend figuring out HOW to make something work?
How much of this time is allocated to production budgets?
Software is primarily an R&D business, and if people can't take
the heat of that kitchen they should get out, not change the rules
by proclaiming themselves in the production business to placate
their stockholders who demand earnings, and then demand production
performance from people doing what essentially is R&D.
With a building, the entire design is specified in the Engineering
phase.
With software, the entire design usually isn't known until the
project is finished because nobody seems to have enough experience
or foresight with software to accurately establish the design.
Then the engineering phase is often not done at all. The architect
gives his incomplete rendering to a "construction" person and asks
him for an "estimate". After the "construction" phase the estimate
is found to be far off, and the "construction" person is chastised
for not doing a good estimate.
In construction, the construction party NEVER gives an estimate
based on an architectural rendering! This is impossible! Yet in
software, this happens regularly. Department managers have to
stick to their pre-set budgets, so they avoid the per-diem
engineering expense and throw this responsibility on the shoulders
of the "construction" person who has no engineering design on
which to base an estimate. And if he DID have such a design, his
services would not be needed because the product would have
already been completed in the engineering phase.
People often speak of the problems of the software industry and
the man-years backlog. The first step to solve this crisis is to
admit what programming really is, instead of trying to fit a
square peg into a round hole.
In article <30...@cup.portal.com> cliff...@cup.portal.com (Cliff C Heyer) writes:
>I am often finding myself in a situation where I'm asked to give
>an estimate for programming time on some new and completely
>different program that I have no track record to estimate by. When
>my estimate turns out to be wrong, I am told "the building
>industry does it, so can you."
>
>Then I got thinking, "well do they really do it?"
>
[Analogy of software development and the construction industry].
I think that this article points out a lot of what is at the heart of
the "software crisis", namely that software development has been looked
at by most as being similar to other forms of engineering, yet somehow
we just can't seem to get our act together and make it work.
I think there is one fallacy in the analogy, and that is that software
development is not finished when the "engineering" is done. For one
thing, the high-level and low-level designs of a product may be done
concretely, before the actual implementation is done. Also, once the
programs are written and working, the product is still far from finished,
since there will need to be further debugging, refinement, testing, and
packaging before it is ready to go to market. (I'm assuming a commercial
product here. An in-house system may not get this fine a treatment.)
It seems that the industry is starting to finally get a handle on the
nature of software development, realizing that as Cliff points out,
software engineering isn't like other kinds of engineering.
The nature of software development is such that if we can have many
rapid feedback cycles, the result is a product that more closely meets
the needs of the end user. Rapid prototyping is one step toward this goal.
Another step is the realization that we need to use techniques that
lend themselves to changes in the requirements and to the software that
is the product. The use of concise specification methods rather than
voluminous documents is a big help here. The use of well-encapsulated
designs is also critical.
There have been some models for software estimation (e.g. the COCOMO
model) that take into account the margin of error in the earlier stages
of the project. As the project moves into the low-level design
phases, the estimate becomes more accurate.
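For readers unfamiliar with COCOMO, its basic form can be sketched in a
few lines. The coefficients below are the published "organic mode"
values from Boehm's model; they are illustrative defaults, not numbers
calibrated to any particular project.

```python
# A minimal sketch of the basic COCOMO equations. The coefficients
# are the standard organic-mode values; real use requires calibration.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Estimated effort in person-months for `kloc` thousand lines of code."""
    return a * kloc ** b

def cocomo_schedule(effort_pm, c=2.5, d=0.38):
    """Estimated calendar time in months, derived from effort."""
    return c * effort_pm ** d

effort = cocomo_effort(32)           # a hypothetical 32 KLOC project
months = cocomo_schedule(effort)
print(f"effort ~ {effort:.1f} person-months, schedule ~ {months:.1f} months")
```

Note that both equations take size as the input; the uncertainty in the
early phases is largely uncertainty about that size, which is why the
estimate tightens as the design firms up.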
It is clear that if the software industry continues to operate as it has
for the past 3 decades, the problems of software that is expensive,
unsatisfactory, and that takes a long time to develop will continue to
plague the world.
Object-oriented design and programming is an effort to make software
engineering be more like other kinds of industries, by creating software
components that are building blocks, and that can be reused without
having to re-design them for each project.
--
John Dudeck "I always ask them, How well do
jdu...@Polyslo.CalPoly.Edu you want it tested?"
ESL: 62013975 Tel: 805-545-9549 -- D. Stearns
Designing software is more like creating a new antibiotic than it is
like building a bridge.
Ralph Johnson -- University of Illinois at Urbana-Champaign
I agree with Cliff that software developers do not tend to work designs
down to the same level of completeness that builders often do. However,
the complexity of the typical software design is usually much greater
than the typical construction project because construction materials are more
standardized than software modules are. While Cliff suggests that software
designs be taken down to the line of code level, the analog of this is not
defining every bolt and where it goes, but perhaps describing the depth of
every groove at each point (external interface) and the precise molecule
by molecule composition of the material. This level of detail IS NOT
done on construction projects. Requiring it might reduce some structural
failure errors, but it would overburden the project with too much paperwork,
and testing (QA) time.
I believe that this is largely why such detail is not supported by
management on software projects. Moreover, because the engineering
costs are small compared to the overall construction costs,
people are willing to pay for
a "paper study" (i.e. the engineering design) before the construction
is started. It is okay to pay that small amount in order to get a
large amount of predictability in the more expensive later portions of the
project. However, in Cliff's suggested version there would be no later
expensive production phase, since the completed engineering phase goes
down to the level of a line of code and is completely runnable.
Unfortunately, that means that management has to totally commit to a
project even before they know what it will cost. That's unlikely to happen
in any group of risk-averse managers. At best, this might shift the
cost control and estimation into the architecture phase where it isn't
the production (now design phase) engineer's problem. But this just
shuffles the cost prediction problem to someone else; it doesn't solve
it. For that reason, it may not be considered a win by management
responsible for the entire project.
The truth of the matter as to how building contractors can predict
schedules better than software engineers is that they have a book
of expected times to do standard tasks. The tasks and materials
are pretty standardized and have been done by large numbers of
people so that reasonable maximum times have been observed. More
experienced contractors (i.e. 20+ years experience) often have their
own experienced judgements for deviation from the norms in the
standard contractor's book. They use this to further refine their
judgements. But new contractors don't start at ground zero, because
they get to start with the book times. Also, note that the book times
tend to be reasonable MAXIMUM estimates, since the determining factor
is usually the quality and experience of the help, and contractors want
to be able to make money even if they have to work with a bunch of
apprentices. The contractors (and the laborers) can often make more
money by being able to beat the book times.
It is different in the software world. Most people are asked to make
an estimate without any "book" that they can consult. The tasks,
materials and skills of the people on the project can be more variable.
So new estimators don't do well. Experienced (20+ years) estimators tend to
make better estimates, but there aren't very many of them. And they still
have to deal with the greater variability of tasks, materials and workforce.
A last problem is that in software it is common to obscure the difference
between a schedule (i.e. a contract) and an estimate. A contract or schedule
needs to contain enough time (slack) to cover unfavorable time variances. A
contract or schedule should represent a worst case, but should be so
solid that other people can build their own commitments around it. This
rarely happens in software in this country, but it is very desirable.
An estimate on the other hand is how long you think it might take. An
estimate doesn't handle the worst case scenarios since you don't think
those are likely to happen. When they do, you change your estimate.
Estimates slip; schedules shouldn't--but when you confuse the two,
schedules do slip all the time and that is exactly what we see happening
around us. I've endeavored to constantly make a distinction between
schedules and estimates and this has required a lot of education of
my managers and my employees. But it has enabled my project teams
to keep an enviable track record on meeting their commitments.
Unfortunately, I have found that many people not only turn in mean estimates
instead of schedules--they often turn in the "optimistic case" estimates
which are even more sure to lead to poor predictions. I've noticed that
people often turn in either the COCOMO mean times (typically) or the
earliest times. I've found that showing the huge variance between best
and worst case is effective in helping others to understand why estimates
are so different from schedules (or why schedules need to be so
conservative).
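To make the estimate/schedule distinction concrete, here is a sketch
using the classic PERT three-point formula. The task durations are
invented for illustration; the point is how far a padded schedule sits
from the mean estimate, let alone from the optimistic one.

```python
# Sketch of a PERT-style three-point estimate (illustrative numbers).
# mean  = what you think it might take (an "estimate")
# A schedule should add slack to cover unfavorable variances.

def pert(optimistic, likely, pessimistic):
    mean = (optimistic + 4 * likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

mean, sd = pert(optimistic=4, likely=8, pessimistic=20)   # weeks
schedule = mean + 2 * sd    # pad roughly two sigma for a solid commitment
print(f"mean estimate {mean:.1f} wk, conservative schedule {schedule:.1f} wk")
```

Showing the spread between `optimistic` (4 weeks) and `schedule` (well
over 14) is exactly the kind of variance display described above.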
This leads to a final point. For many software projects time to market is
extremely important. Early entrants often make significantly more
money than late entrants. So there is the natural pressure (mentioned
above) to look for optimistic estimates. In contracting, the "slack"
is built into the book estimates, and if you finish early you can
always find some way to add quality, or do special finishing touches
that will be desired by the customer. In software the early dollars
encourage us to eliminate the slack. But this is a mistake. Slack
is necessary because things don't ever go all right. Making investment
decisions on shaved schedules that won't be profitable if they go their
normal amount of time will drive a company out of business. Better is to
make a modest product that can be done quickly even given the slack time.
If it is profitable with the slack time, it can be even more profitable if
due to your clever people, and good management you get it done early.
But at least you won't go bankrupt on a small project slip.
Scott McGregor
mcgr...@atherton.com
This is something that's been troubling me recently, especially as I've been
looking at object-oriented design & methods. There is a lot of emphasis on the
concept of "construction", the OO idea being that you "construct" out of
"components". I just suspect we are trying to take the wrong analogy in the
wrong direction.
The conceptual problem with software is that you don't actually construct
anything. Even when the program is a compiled executable binary, it is still
abstract, it is still _soft_. It only becomes something reasonably concrete
when a computer executes it to make it do something. In other words, the
computer does the construction automatically, and always has done (that's why
they were invented!). OO terminology actually fits in with this idea - the
software consists of classes which invoke constructors when they are executed.
The objects are the things which are constructed, and this only occurs at
run-time.
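That distinction - class as drawing, object as the component constructed
at run-time - can be shown in a few lines. The names here are invented
purely for illustration:

```python
# The class is the "engineering drawing"; nothing is built until the
# program runs and the constructor executes.

class Valve:
    def __init__(self, diameter_mm):   # the "constructor" runs at run-time
        self.diameter_mm = diameter_mm

# Construction happens here, during execution, not at compile time:
component = Valve(diameter_mm=15)
print(component.diameter_mm)
```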
I think it is more helpful to consider the _objects_ as the components, not the
classes. Thus a repository of components is an OODB, not a software library.
So what is the software? Think of it as analogous to a detailed engineering
drawing. It contains all the information required to create a component which
has a given set of properties. As a program, the information is in a form
which enables the computer to construct the specified component. Whether the
form of this program is source text, linkable object, or executable doesn't
matter; these are just transformations of the same thing.
What is _not_ simple transformation is creating the detailed drawing from a
specification. In hardware engineering this is the job of a skilled draftsman,
in software it is the job of the programmer. In both cases the skill is the
interpolation of detail which is not explicit in the specification, but must be
described before the final component can be constructed. So our programmers
are really doing detailed design work, not manufacturing goods (we've always
known that really, haven't we :-).
As an aside, this implies something relevant to the CASE debate. My perception
of CASE is that its ultimate goal is to be able to generate code automatically
from high-level designs. If the programmer really is working as a draftsman
and generating detailed design, then this is not actually achievable. Writing
code _requires_ human skill, and cannot be totally automated. (I anticipate
flames from CASE fans - not that there seem to be many - for that one!)
I feel that the real reason software engineering has so many problems compared
to its hardware counterpart is the nature of programming languages. Try
comparing the characteristics of source code to a detailed engineering drawing.
The most important property of the drawing is that it still embodies the
original design in a visible way. The designer, even if he is not a draftsman,
can look at the drawing and tell almost at a glance whether it's what he
designed. Try doing that with source code! Vast amounts of money are now
being spent by the computer industry trying to reverse-engineer old source in
order to extract the underlying design. An old set of engineering drawings
is a lot clearer about what it represents.
Having committed a possible heresy in saying that OOD is not building software
from components, I should say what I think it really is, especially as I think
it is still the best way forward. Since I've said that programming is really
detailed design, it follows that OOP is design re-use, not component re-use.
Try this comparison:
Engineering drawing Programming
hand-drawing each design straight-line code
stencils for standard shapes function libraries
CAD with component database OO with re-usable classes
The OO programmer builds high-level class designs by using existing lower-level
classes, sometimes adapting them (via inheritance) if they are not exactly what
is required. The engineer using a CAD system builds his drawings in very much
the same way. Interestingly, CAD systems are probably one of the first areas
to which OO methods have been widely applied.
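A small sketch of that "design re-use via inheritance" idea, with all
class and field names invented for illustration: a lower-level class is
adapted rather than redrawn from scratch, the way a draftsman adapts a
stock stencil.

```python
# Re-using a lower-level class design and adapting it via inheritance.

class Window:
    def __init__(self, width, height):
        self.width, self.height = width, height
    def area(self):
        return self.width * self.height

class ScrollableWindow(Window):          # adapt the existing design
    def __init__(self, width, height, scrollbar_width=16):
        super().__init__(width, height)
        self.scrollbar_width = scrollbar_width
    def area(self):                      # adapted, not redrawn from scratch
        return (self.width - self.scrollbar_width) * self.height

print(Window(100, 50).area())            # 5000
print(ScrollableWindow(100, 50).area())  # 4200
```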
I could continue ad nauseam on this theme, but I'll save some net BW by saying
that my conclusion from this analogy is that improvements in software
engineering will only come about if programming languages are perceived and
developed to represent both design and implementation, rather than just being
implementation formalisms. Using OOP, the code for a class must be able to
describe the intrinsic properties of the ADT it represents in such a way that
the implementation details are _guaranteed_ to conform. Run-time checking of
assertions, in the style of Eiffel, is a pragmatic but not altogether
watertight method. Algorithmic analysis by the compiler is a more elegant
concept but far more difficult to do (any comments from experts in this
field?).
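Eiffel's assertion style can be mimicked in most languages; here is a
sketch in Python using a hypothetical `contract` decorator. As noted
above, this is pragmatic but not watertight: it only catches violations
on the cases that actually execute.

```python
# Eiffel-style run-time pre/postcondition checking, sketched as a
# decorator. `contract`, and the function below, are invented examples.

def contract(pre=None, post=None):
    def wrap(fn):
        def inner(*args):
            if pre:
                assert pre(*args), "precondition violated"
            result = fn(*args)
            if post:
                assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: abs(r * r - x) < 1e-6)
def sqrt_newton(x):
    """Square root by Newton's method; the contract states the ADT property."""
    guess = x or 1.0
    for _ in range(50):
        guess = (guess + x / guess) / 2
    return guess

print(sqrt_newton(2.0))
```

The contract expresses the intrinsic property (the result squared gives
back the argument); the loop is merely one implementation of it.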
Have I said something contentious? I hope so. I'd like to see comments from
anyone with real hardware or building engineering experience; I might just
have been talking a load of bull.
--
Rick Jones You gotta stand for something
Tetra Ltd. Maidenhead, Berks Or you'll fall for anything
ri...@tetrauk.uucp (...!ukc!tetrauk.uucp!rick) - John Cougar Mellencamp
Several other analogies I've seen used
Proving a theorem (C.A.R. Hoare)
Writing a novel (Apple lead programmer)
Manufacturing ('Software Factories').
The one that seems most apt, and achievable, to me is
Plumbing
To wit, plumbers seldom do production. Every situation is new and uniquely
different since plumbing is a build-to-order business like programming,
not a production business like automobile manufacturing. There are
appreciable requirements, specification, design, prototyping, and
implementation phases. And most significantly of all, there is a robust
commercial marketplace that both limits and enhances the plumber's
"creative process".
Furthermore, notice the balance of power in the producer/consumer relationship
between plumber and homeowner. To wit, if a plumber proposed to work as
programmers do, by inventing everything from first principles, the homeowner
is empowered (since plumbing is a concrete business that everyone has the
reasoning skills to understand and exercise some influence over) to take a
definite stand. Few managers and software consumers are empowered to do this
because software is so abstract that important decisions are invisible to
them.
--
Brad Cox; c...@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482
Ah the old "engineers do things in a nice, neat formal manner" canard.
BS. All fields of engineering suffer from similar problems. Note for
instance the comment, "The designer...can...". Great, but what about the
maintainer - are the drawings obvious to him? I doubt it. In @95% of the
drawings I've seen there are modifications that do not show on the
drawings. So one has to reverse-engineer what they are. Further, even
the original designer can only understand the drawings "at a glance" if
the project is simple. Forty pages of detailed drawings with numerous
notes is not something you can scan quickly for correctness. BTW, having
worked closely with both mechanical and electrical engineers and being a
testing engineer myself, I can assure you that much of the work that
goes on is seat-of-the-pants effort. Sure we generate drawings before
building, but unless it's a routine "done a million times before"
project, those initial drawings are nothing more than a jump-off point.
Many times the changes and iterations are not reflected in the drawings
until the product is complete, and guess what - then it's more a matter
of reverse-engineering the drawings than updating them.
Oh, but WE don't do things like that, OUR engineers are true engineers;
you must have worked with shoddy firms. Oh really? Take a good look
around - got any boards where the documentation says "...to set jumper
J12, cut the trace between...". Any boards with green wires sprouting?
Ever install field upgrades that correct defects in the original design?
Ever have your car recalled due to a design flaw? What's the latest
count of NASA's lost launch vehicles?
Richard Neitzel th...@thor.atd.ucar.edu Torren med sitt skjegg
National Center For Atmospheric Research lokkar borni under sole-vegg
Box 3000 Boulder, CO 80307-3000 Gjo'i med sitt shinn
303-497-2057 jagar borni inn.
I think this is a useful view, because it stresses the use of "standard"
components (whatever standard means) and not starting first with iron ore,
but the problem is that a plumber's task is always fundamentally the same.
Carry in fresh, carry out used. The drainage angles may be goofy, the pressure
may have to be high, the building code may specify copper over iron, etc.,
but the task remains constant.
I much prefer the "writing a novel" analogy, because a novel may have
a variety of different purposes. To inform, to entertain, to shock, etc,
all are possible. Further, a novel is constructed much in the same
way software is - by magic :-) :-)
(ok, I know, that's heresy, couldn't resist.)
chris...
--------------
The three rules of plumbing (heard from an ex-plumber friend)
1. Sh*t runs downhill.
2. Never chew your fingernails.
3. Payday's on Friday.
--
Christopher Lott Dept of Comp Sci, Univ of Maryland, College Park, MD 20742
c...@cs.umd.edu 4122 AV Williams Bldg 301-454-8711 <standard disclaimers>
As an example, take a user interface design program that allows a user to
graphically build a set of windows, and the window items that are contained
in them. Each time the user adds a window item and places it in a specified
window, a relationship between the window item and the window containing it
is established. I would argue that this is just as valid a method of
software design as the design of a class hierarchy, although for some kinds
of problems, it might be impractical.
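The relationship such a builder records might be modeled like this; the
class and field names are invented for illustration, and the "design" is
simply the data structure the user builds up interactively.

```python
# Sketch of the window / window-item relationship a graphical UI
# builder establishes each time the user places an item.

class WindowItem:
    def __init__(self, name):
        self.name = name
        self.window = None        # set when the user places the item

class Window:
    def __init__(self, title):
        self.title = title
        self.items = []
    def place(self, item):        # placing an item records the relationship
        item.window = self
        self.items.append(item)

w = Window("Preferences")
w.place(WindowItem("OK button"))
w.place(WindowItem("Cancel button"))
print([i.name for i in w.items])
```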
Pete Welter
Apple Student Rep.
University of Wisconsin-Milwaukee
pe...@csd4.csd.uwm.edu
There is little maintenance with film, but sequels which borrow a great deal
of the original script as well as some film cuts might offer similar
challenges.
Any comments on this analogy?
Warren
==========================================================================
Warren Harrison war...@cs.pdx.edu
Department of Computer Science 503/725-3108
Portland State University
Re: John Dudeck
/the "software crisis", namely that software development has been looked
/at by most as being similar to other forms of engineering, yet somehow
/we just can't seem to get our act together and make it work.
But why should it work? In other forms of engineering, rigid time
estimates are not given. It's just that somehow programming was
slipped into the production department, and when things
can't be done on schedule people get upset.
Programming will never be done on schedule, because you are not
assembling components when you program. You are DEVELOPING new
components and devising a way to assemble them. These components have
no previous history to compare to for an accurate estimate.
The auto industry makes cars every year, so they know about how
long it takes to design a car. Computer programs are not like cars;
each one has its own unique purpose and needs, and these cannot be
classified en masse.
/I think there is one fallacy in the analogy, and that is that software
/development is not finished when the "engineering" is done.
Sorry, I can't see where I implied this in my posting.
Engineering continues until the product is complete. There is no
construction phase, because the program IS the design, and IT
is what is being sold.
/ For one
/thing, the high-level and low-level designs of a product may be done
/concretely, before the actual implementation is done. Also, once the
/programs are written and working, the product is still far from finished,
/since there will need to be further debugging, refinement, testing, and
This is all part of the engineering phase.
A software house is really an engineering house. But somehow
it has become thought of as a construction company. The drive is on
profits, profits, profits. So engineering is forgotten because it
is too much of an open-ended expense, and everything is forced
to be on a production schedule.
/It seems that the industry is starting to finally get a handle on the
/nature of software development, realizing that as Cliff points out,
/software engineering isn't like other kinds of engineering.
This is NOT the point I was making. You COMPLETELY missed my point.
Software engineering IS like other forms of engineering, but nobody
treats it that way. This is the problem.
The whole problem is that SOFTWARE HOUSES REFUSE TO TREAT
SOFTWARE LIKE ENGINEERING! They try to treat it like production
and manufacturing, and then they cry when deadlines are missed.
The point is that "deadlines" are inappropriate in engineering, because
the quality of the product suffers (defective cars, TVs, etc.). You don't run
an engineering firm with that mentality. You don't run a software house
with that mentality, but many do. An engineering firm with a profit only
motive will go out of business after all the structures it designs collapse.
Actually, this is what is happening with a lot of software houses these days.
/
/The nature of software development is such that if we can have many
/rapid feedback cycles...Rapid prototyping ...changes in the requirements
/...concise specification methods... (COCOMO model)...becomes more accurate.
Good comments, but they have nothing to do with my point. My point
is that this process should be classified as engineering. The
estimates won't improve, because
engineering estimates are much more relaxed than production. But we
no longer will be able to criticize bad estimates, which will increase job
satisfaction and the general quality of software by making programmers
"want" to do better work.
/... the problems of software that is expensive,
/unsatisfactory, and that takes a long time to develop will continue to
/plague the world.
/Object-oriented design and programming is an effort to make software
/engineering be more like other kinds of industries, by creating software
/components that are building blocks, and that can be reused without
/having to re-design them for each project.
The process of fitting engineering into production continues. Now
they have devised "software components" to further facilitate this
process. To people who think they will get software "components" that meet
specifications, use a minimum of resources and do not require engineering,
GOOD LUCK.
Why not try treating software as engineering and see what benefits accrue?
I predict "software building blocks" to be a dismal failure, unless one is
happy with resource hog programs. There is too much tuning that a craftsman
can do to double and triple performance in certain situations. Adding "black boxes"
to eliminate this process will guarantee mediocre programs. That is, until we have
10000 MIPS on every desktop, then maybe "how well" software works will no
longer be an issue.
---------
Re: Scott McGregor
/ However,
/the complexity of the typical software design is usually much greater
/than the typical construction project because construction materials are more
/standardized than software modules are.
I talked to people who built a 50-floor high-rise. The design effort I saw
makes software efforts I've seen look pretty poor.
/.. line of code level... [same as] describing the depth of
/every groove at each point (external interface) and the precise molecule
/by molecule composition of the material.
Every item is accounted for in the building estimate, while every item
IS NOT accounted for in a software estimate. No wonder software
estimates are wrong so much of the time. But then as Scott points out,
you can't complete the whole program just to get the software
estimate. The software business is a tough one to be in.
/Moreover, because the engineering costs are small
/compared to the overall construction costs, people are willing to pay for
/a "paper study"
This is true. In a big project, the engineering cost is small compared to
MATERIALS. But the accuracy is good because all the components
are known quantities to estimate.
But software has NO materials! The design IS the product, so there is
no construction cost to overshadow the engineering costs.
Software is all engineering. You are never assembling components,
but are devising new components and developing ways to fit them together.
(glue logic, data structures, etc.) When you are done devising the components
you are finished, BECAUSE THAT IS WHAT YOU SELL! The only "production" cost
is version control and making disk copies.
/However, in Cliff's suggested version.....management has to totally
/commit to a project even before they know what it will cost.
/...[this] shuffles the cost prediction problem to someone else it doesn't solve
/it.
My point is that software is not the business to be in if you want the
accurate cost prediction that occurs in the construction industry.
I don't have a solution to this problem, and I don't think there is a solution,
just as there is no solution to accurately estimating engineering cost
in any other field. In this respect, all my writing on this subject is
"negative" - that is, I am critical of what is being done and I have no
solution except to recommend an "engineering" classification. But seeing
how unsuccessful previous attempts have been at solving the prediction
problem, I think restructuring software as an engineering R&D department
can only have beneficial effects. I realize this is not compatible with the
interests of time-to-market folks, but no guts, no glory.
I would rather see software classified as engineering, with no construction
component, and have all software ventures classified as "engineering
studies". Such studies are not fixed-priced, and the accuracy is equal
to the amount of cash you pump in. If you want an accurate estimate, you
PAY FOR IT. Don't do it by breaking the backs of programmers who make the
educated guesses, which is all that can be made unless the program is
coded and finished. Programmers are really getting a raw deal in the
current scenario. That's why so many of them change jobs and are
generally disillusioned with their career.
We would have a lot better software delivered on a more prompt schedule,
and have happier employees. People who feel they are part of something will
do much better than those passed over for promotion because of a bad estimate.
In the end, I believe such an approach would pay off.
/The truth of the matter as to how building contractors can predict
/schedules better than software engineers is that they have a book
/of expected times to do standard tasks. The tasks and materials
/are pretty standardized and have been done by large numbers of
/people so that reasonable maximum times have been observed.
THIS IS THE CORE OF THE ISSUE!
I submit that this will NEVER occur in software because the components
are necessarily different for each program's unique needs. All
buildings are made of the same pieces. Why should we think that
programs in THOUSANDS of industries will ever share many of
the same components? I think the idea is poppycock.
/It is different in the software world. ..people ... make
/an estimate without any "book" [to] consult.
/ ... tasks...can be more variable.
Exactly what I've said.
/[distinguish] a schedule (i.e. a contract) and an estimate.
[discussion of inclusion of variances in estimates, scheduling, etc.]
Omission of these items surely contributes to erroneous estimates.
/For many software projects time to market is
/extremely important. Early entrants often make significantly more
/money than late entrants.
Try a small core of programmers who own enough stock in the company
that they will make a nice nest egg if the product makes it big. Treat
them as an R&D group rather than a production group with
conventional management. I think this will produce more software
of better quality in less time than conventional approaches. Financial
backers of the company won't like this approach though because
if one person leaves, the value of their investment will be diluted.
/ So there is the natural pressure (mentioned
/above) to look for optimistic estimates.
YES! I find I always have to match the estimate my manager
suggests, because if I don't I'm in trouble. Actually this
makes estimating easy, except that I am still responsible
when the estimate is wrong.
-----
Re: Ralph Johnson
/However, these guys should be reusing software instead of rebuilding
/it.
Try it sometime, it's not as easy as you think.
/After "the industrial revolution" we will start reusing software
/instead of rebuilding it all the time.
Hah. This will never happen, and I'll explain why. Take a date-validation function
for example. Date validation is the same everywhere, so you acquire a C library
that does it. When you get it, you find it requires an ASCII format that your data
does not have. You don't have the source to the "component", so you have
to restructure your date from (char)10/9/90 to (long int)19901009 for
the function, wasting CPU cycles. You get another library with a function that
accepts an argument declaring the syntax of the date. Now the function
has to check this flag internally on every call, wasting cycles and wasting
space on syntax conversions that are never used.
(OOP folks, how does OOP or CASE help this problem?)
Some people can't waste the CPU cycles, so they will code it themselves.
Others realize it takes much more money to fit others' components into
their program than to develop their own. Still others learn that
using others' components means they have black boxes that cannot be
modified, limiting their future choices.
When the day comes that a company goes into business manufacturing
thousands of these items in every possible variation, then we may
have partial component-based software development. But now there
is no money in this because it takes more money to administer component
use than to hire someone to write the component. Also everyone is busy
making software for the retail market where the most customers are
rather than OEMing components.
I predict making software components will do nothing to solve the
problem of estimating software development time. There will still
be custom components to be made, and these will suffer the same
estimation problems.
Re: Discussion on OOP:
I have yet to see how OOP solves any of the problems regarding
estimating time for writing a program. Someone at some point has
to develop a sequence of steps that make each "object" work. The
subjects I'm discussing aren't even relevant to CAD because using
CAD isn't writing a program - CAD is a higher-level abstraction using
an existing set of modules already written.
Re: Plumbing & movie analogies:
These are new ones to me and interesting! But in my posting I'm trying
to stick to basic principles to figure out the "truth" rather than make
comparisons, which can be misleading.
Cliff Heyer
cliff...@cup.portal.com
=======================================================================
Imagine the army (software engineering) and the navy (computer science) arguing
about the nature of a swamp. The army says it's a lot like land (concrete),
and the navy says it's like the sea (abstract). Both are partially right, and
both are partially wrong, because the swamp is too thick to drink but too
thin to plow.
Software is a swamp that is too thin for engineering analogies and too thick
for mathematical ones. Instantiation, inheritance, and static binding
are part of software's mathematical heritage, whereas reuse, encapsulation,
and dynamic binding are from its engineering heritage. Whichever analogy you
choose as 'right' is sure to be wrong somewhere, just as land and sea are
only first approximations to swamp.
The fact that OO brought all of these disparate elements together, even though
they mix about as well as oil and water, is part of the reason it suddenly
became so popular. Both army and navy could find something there to like.
But the fact that the different factions see totally different things to
like there is also responsible for everybody else's confusion about what
OO really *means*.
cliff...@cup.portal.com (Cliff C Heyer) writes:
>Re: Ralph Johnson
>
>/However, these guys should be reusing software instead of rebuilding
>/it.
>Try it sometime, it's not as easy as you think.
>/After "the industrial revolution" we will start reusing software
>/instead of rebuilding it all the time.
>Hah. This will never happen, and I'll explain why. Take a date-validation function
>for example. Date validation is the same everywhere, so you acquire a C library
>that does it. When you get it, you find it requires an ASCII format that your data
>does not have. You don't have the source to the "component", so you have
>to restructure your date from (char)10/9/90 to (long int)19901009 for
>the function, wasting CPU cycles. You get another library with a function that
>accepts an argument declaring the syntax of the date. Now the function
>has to check this flag internally on every call, wasting cycles and wasting
>space on syntax conversions that are never used.
Here is where Object-oriented design comes in. Instead of a date validation
function, what you would have is an object called date, and one of the things
that it can do is maintain its validity. You don't have to have a date
validation function, because a date object can never be anything but a date.
As an application programmer, you don't know or care what form the date
is represented in. All you know is that there is an instance of a date.
You can ask it to display itself, pass itself to you for you to use in
your program, etc.
>Some people can't waste the CPU cycles, so they will code it themselves.
>Others realize it takes much more money to fit others' components into
>their program than to develop their own. Still others learn that
>using others' components means they have black boxes that cannot be
>modified, limiting their future choices.
>
>When the day comes that a company goes into business manufacturing
>thousands of these items in every possible variation, then we may
>have partial component-based software development. But now there
>is no money in this because it takes more money to administer component
>use than to hire someone to write the component. Also everyone is busy
>making software for the retail market where the most customers are
>rather than OEMing components.
I believe that there will be a move to commercial component libraries,
but they will be object-oriented and not function-oriented.
>I predict making software components will do nothing to solve the
>problem of estimating software development time. There will still
>be custom components to be made, and these will suffer the same
>estimation problems.
Well, at least the problem will be simplified. Think of what happens
in electronic circuit design. The design engineer has databooks full
of chips and other components that he can draw upon. At some point, it
may be cost-effective to build an application-specific chip. But how
does he do this? He doesn't have to design his chip at the transistor
level, because he has a CAE system that allows him to assemble pre-designed
logic cells, which perform higher-level functions and have specified
interfaces.
In the future, I believe a large percentage of software development will
be done by assembly from pre-developed class libraries. And the parts
that are not already in a library can be built up from sub-components
and added to the library for next time.
>I have yet to see how OOP solves any of the problems regarding
>estimating time for writing a program. Someone at some point has
>to develop a sequence of steps that make each "object" work. The
>subjects I'm discussing aren't even relevant to CAD because using
>CAD isn't writing a program - CAD is a higher-level abstraction using
>an existing set of modules already written.
OOP at least improves the situation to the extent that the object classes
that are already written don't require any development time in order to
reuse them. The issue is not whether it helps you estimate the time needed
to develop the parts that have to be done from scratch...just that you
don't have as much to do from scratch compared to a non-object-oriented
project. The effort spent on developing well-designed class libraries
is appreciable, but it is all the more appreciated when you don't have
to re-expend it on every project.
I have been one who has talked about the craftsman mentality: that
if something is worth doing, it is worth doing well. But that does not
mean that every time I go to write a program I will write every line of
code from scratch. If there is a way that I can use well-made components,
so much the better. Then I can spend my energy working on the real design
issues of the project.
I agree that programming as the term is usually defined today, as a
gate (expression) or block (subroutine) level activity, is certainly
more like novel-writing than plumbing.
I was referring to when chip (instance/message), card (task/stream)
and rack (process/file) level reusability comes fully online.
Programming at these higher levels of abstraction, particularly at
the card and rack level, should be *precisely* like plumbing.
In hardware engineering today, notice that those who work with card-level
modularity would agree that their task is "fundamentally always the same".
And that's precisely what they want, and why they choose to stay at
that level of modularity. People decide to work at higher levels
precisely because they *want* it to be always the same; no muss,
no fuss, plug it in and it *works*.
Note that the rack-level programmers of today (Unix shell) even speak
of their modularity/binding technology as pipes and filters.
This is a perfect example of why scientists get bashed for not being
practical.
You are interfacing with any number of applications and hardware platforms.
They DO NOT have a standard way to represent a date.
Here is a list: 19900101, 1/1/90, 01/01/90, 01/01/1990, 1/1/1990,
Jan 1, 1990, January 1 1990, January 1, 1990, Jan-01-90, JAN-1-90,
90.1.1, 900101, 0110100101110101000101001010101 (binary UDT,
universal date & time), BCD (binary coded decimal - do you know what
that is?), plus dozens more.
I submit OOP to have NO BENEFIT in terms of development time to the writing
of a program to reduce the above to a common date. The resulting
code may be easier to maintain, and more reusable. But the real problem
today is time estimation; that is what is bankrupting companies and
departments.
These programs are a pain in the a** to write, and they always will be, no
matter what new programming fad comes along.
>As an application programmer, you don't know or care what form the date
>is represented in.
Most programs interface with the external world, not just an internal one.
What you speak of is fine for engineering an encapsulated product. Even so,
YOU STILL HAVE TO SELECT A DATE FORMAT FOR SCREEN DISPLAY, so
your "object" must be able to display it. Maybe the boss wants 10/9/90
but the object was written in Fiji and has the 90.10.9 format.
Somebody has to make that change, it's not done by magic.
>You can ask it to display itself, pass itself to you for you to use in
>your program, etc.
This is nice, I'm not trying to argue with this.
>>I predict making software components [OOP] will do nothing to solve the
>>problem of estimating software development time.
>Well, at least the problem will be simplified.
But nothing to change the fundamental problem: When you are programming
you are not assembling pre-engineered components. You are developing
a new sequence of steps to perform a task. This is not and will never be
accurately quantifiable. (I'm just waiting for some manager to give me
his logic why this is wrong...and post it)
All OOP does is move the problem up to a higher level of abstraction.
Instead of having the problem with individual lines of C code, you now have
the problem with a sequence of "components."
A computer does things sequentially, that is that. At some point you have to get
down and dirty and specify the sequential steps. If you hide them inside
"objects" or "foobars" it does not matter, the sequential steps still have
to be there, and someone has to estimate them and program them.
I think focusing on making the workplace a happier place by looking at
psychological issues, etc. would prove more fruitful for productivity than
OOP ever will. But participation in the psychology USENET newsgroup is
at an all-time low.
>Think of what happens
>in electronic circuit design. The design engineer has databooks full
>of chips and other components that he can draw upon. At some point, it
>may be cost-effective to build an application-specific chip. But how
>does he do this? He doesn't have to design his chip at the transistor
>level, because he has a CAE system that allows him to assemble pre-designed
>logic cells, which perform higher-level functions and have specified
>interfaces.
But the engineer in the scenario you describe has far more limited choices
than a software programmer has. You have an infinite number of different
functions and arguments you could devise to perform a task. You have to
do research to find the right ones.
With the electronic circuit, you have a finite number of components to
select from to do the job, because you can't "build your own" on the fly
like you can with software.
In addition, unless you have a microprocessor, you have no algorithms
running inside the chips.
With objects, you may have to tune that internal algorithm for a specific
task to save CPU time. Oh, but wait - you can't do this because this
would violate the fundamental principle of objects. Why should you not be
able to take advantage of a flexibility that software affords over hardware???
>But that does not
>mean that every time I go to write a program I will write every line of
>code from scratch. If there is a way that I can use well-made components,
>so much the better. Then I can spend my energy working on the real design
>issues of the project.
I agree, but my experience with the "real world" leads me to suggest
that this is impractical.
Cliff Heyer
===========================================================================
I have jumped into this discussion because I have just finished taking a
course in software engineering, and in the course we spent some time talking
about just these topics. I hope Cliff doesn't think I am flaming him :^)
cliff...@cup.portal.com (Cliff C Heyer) writes:
>>Here is where Object-oriented design comes in. Instead of a date validation
>>function, what you would have is an object called date, and one of the things
>>that it can do is maintain its validity. You don't have to have a date
>>validation function, because a date object can never be anything but a date.
>I know, but this does not solve the above problem. YOU STILL HAVE TO PUT
>DATA IN THE OBJECT! YOU *MUST* VALIDATE IT IF IT COMES FROM AN
>OUTSIDE SOURCE!
Ok, I admit that when you are interfacing to the outside world, you still
need to validate your data.
Part of the issue here is to what degree your outside world is outside the
domain of the classes that you have to work with. The goal is to have more
of the problem domain already available as components, so that you don't
have the situation where your data is coming from outside the system.
>I submit OOP to have NO BENEFIT in terms of development time to the writing
>of a program to reduce the above to a common date.
Well, of course if you have to write this function, you have to. The
goal would be to not have to.
>The resulting
>code may be easier to maintain, and more reusable. But the real problem
>today is time estimation; that is what is bankrupting companies and
>departments.
>
>But nothing to change the fundamental problem: When you are programming
>you are not assembling pre-engineered components. You are developing
>a new sequence of steps to perform a task. This is not and will never be
>accurately quantifiable. (I'm just waiting for some manager to give me
>his logic why this is wrong...and post it)
I would like to see some managerial-level input on this topic, too!
>All OOP does is move the problem up to a higher level of abstraction.
>Instead of having the problem with individual lines of C code, you now have
>the problem with a sequence of "components."
Yes, it DOES move problems to a higher level of abstraction!!!! That is
exactly the point. Of course you can say that C is nothing more than a
higher level abstraction of assembly language. But in going to a higher
level, it also becomes more powerful.
>A computer does things sequentially, that is that. At some point you have to get
>down and dirty and specify the sequential steps. If you hide them inside
>"objects" or "foobars" it does not matter, the sequential steps still have
>to be there, and someone has to estimate them and program them.
A lot of what is gained in OOD is in the de-sequentializing of our problems.
Objects can be considered as independently executing processes. Because of
the extreme decoupling between objects, the problem is broken down into
more manageable pieces. This, it seems to me, should make the problem of
time estimation easier. I'm not experienced in this yet; none of the projects
I have worked on were object-oriented designs.
>
>I think focusing on making the workplace a happier place by looking at
>psychological issues, etc. would prove more fruitful for productivity than
>OOP ever will. But participation in the psychology USENET newsgroup is
>at an all-time low.
Well, to some extent at least, the shift from functional design to
object-oriented design is mainly a psychological one... for the
programmer, that is :^)
>With objects, you may have to tune that internal algorithm for a specific
>task to save CPU time. Oh, but wait - you can't do this because this
>would violate the fundamental principle of objects. Why should you not be
>able to take advantage of a flexibility that software affords over hardware???
The idea is that you shouldn't have to be concerned unless you want to be,
about such aspects as algorithm fine-tuning. Certainly I wouldn't advocate
that algorithms shouldn't be fine-tuned to an application. I would say
that if you can show that an algorithm used in an object in your application
is the cause of performance that does not meet your requirements, then you
should find the source code of the object and fine-tune it, or otherwise
find a solution.
>>But that does not
>>mean that every time I go to write a program I will write every line of
>>code from scratch. If there is a way that I can use well-made components,
>>so much the better. Then I can spend my energy working on the real design
>>issues of the project.
>I agree, but my experience with the "real world" leads me to suggest
>that this is impractical.
You're probably right for now. Whether or not this continues to be
impractical in the future will depend on how well the technology
catches on.
I've been urgently searching for a historical source to back up my speculation
that the cottage-industry gunsmiths, upon hearing of this new-fangled Armory
Practice, would have sounded a lot like us. "Goodness gracious, think
of the trouble and expense it will take to make interchangeable parts. It
will never work. It will be too expensive".
The only historical datum I've found thus far to back this up is that
armory and congressional records showed that it really *was* far more
expensive to build guns the new way. But only the producers seemed
to care; the consumers didn't really care (much) because their priority
had become easier repairability.
Is anyone aware of data to support, or shoot down, this speculation? Data
for either guns *or* software would do nicely.
It's worse. Programming frequently requires using unreliable and unverified
components which are replaced just as soon as you figure out fixes (if you
have the source) and workarounds. The new (outside) components are just as
unreliable and unverified, but require different workarounds and fixes,
as well as a different set of primitives. The original schedule was created
under the assumption of a solid foundation of hardware and subcomponents.
The reality is that one is building on shifting sands.
I highly doubt that any software can be built on schedule until both hardware
and software are verified to conform to specifications of appreciable vintage.
Unless, of course, the schedule allows for an indefinite number of periods
of indefinite length to be spent on porting.
+-----------------------------+--------------------------------------------+
| | Carl Klapper |
| | Odyssey Research Associates, Inc. |
| Verify, then trust. | 301A Harris B. Dates Drive |
| | Ithaca, NY 14850 |
| | (607) 277-2020 |
| | klapper%orava...@cu-arpa.cs.cornell.edu |
+-----------------------------+--------------------------------------------+
Notice that a subtle distinction occurs here. There is a difference
between how a field is traditionally practiced and how computers affect
this tradition. When I say that we apply software to ill-understood
domains, I am saying that computers change a field such that we don't
understand it as well anymore. For example, accounting practices have
been fairly well known, but applying computers to accounting poses many
new problems and opportunities. The spreadsheet has both alleviated
many problems and changed accounting practices. Of course, there are
also fields that were never conceived of before computers - like
programming (a case of the tail wagging the dog?).
What all of this means for this discussion is that we won't have a
software engineering methodology as scientific as bridge building until
we have the necessary scientific methods for each domain we wish to
apply computers to. And since the domains are ever evolving and
appearing, always pushing the limits of what we can do with the
computer, the production line vision of general computer applications is
nothing more than a dream.
-- Scott
sco...@boulder.colorado.edu
There have been several studies that show that most tasks are highly
repetitive. Thus, most systems do not have to be newly developed; they
can be assembled from reusable components. The issue comes down to
economics. What is the cost of cataloging (i.e., identifying the
important attributes) components to be reused, and later identifying
them for reuse with or without modification? Once this is known, then
the overall cost of reuse can be compared to the costs of new development.
Various mathematical software libraries are successful because
components are reliable and easily identifiable. This same success can
be had in other domains too! Someone just has to buy into the high
start-up costs in the belief that the long-term benefits are there.
Unfortunately, the procurement process of American society does not allow this.
> script writer is like a software designer/ producer is like a product
> manager/ director is like the project manager / actors are like programmers
> (I like that part :-) / continuity is like QA
>
> There is little maintenance with film, but sequels which borrow a great deal
> of the original script as well as some film cuts might offer similar
> challenges.
>
> Any comments on this analogy?
I also like the analogy to movie production and use it frequently myself.
However, I believe that we are doing software the way movies were done
before Griffith. I do not think that the director role is like the
project manager--I think that it is like the designer, particularly
the designer of the external interface. Programmers are not like actors;
programmers are like cameramen. In the days before Griffith it was
common that the director (i.e. the person with a vision of what the
audience would want to see) was the same person as the cameraman, who
knew how to operate the camera and make the right lighting exposures. After
Griffith, these roles largely became separated, and it became less necessary
to be a master technician of camera equipment in order to create a good movie.
In programming today, to a large degree we have not made this division
of labor between those people who want to be technically intimate with the
machines (i.e. the programmers) and those people who want to be intimately
knowledgable about what will appeal to their audiences (the designers).
We typically have only product managers who give overall design specs
(i.e. "MS Word and WriteNow were box-office boffo! Let's do something
like that too!"). But detailed control of the user-interface interactions
is typically still done by those who are trying to squeeze efficiency out
of the machine. I believe that the state of the software interfaces also
is comparable in quality (in terms of meeting what users wanted) to the
pre-Griffith style of movie making art.
Note that I don't mean by this that programming is not a difficult and
valuable enterprise. On the contrary, I think it is quite valuable -
too valuable, in fact, to be left to dilettante directors. I don't particularly
mean to celebrate directing and user interface design over and above
programming. I think both are important and that the best movies are
the result of good camera techniques, cinematography AND direction, just
as great software usually reflects both attention to the external
requirements and desires of the consumers AND to internal efficiency
concerns. Some people make better directors and some better camera
operators. I think similarly that some people make better designers
and some make better implementors. One test I sometimes use to
see where people fall on this spectrum is to ask where the most
enjoyment from programming comes from. Those who respond that they
like creating their own more logical world are often better in the
implementation arena. Those who say they are turned on by trying to
figure out what users really want when they don't explain themselves
eloquently are frequently more comfortable in design. At present,
in many jobs these aren't separated--for some people and products
I think we would see improvements if such a separation occurred.
Scott McGregor
mcgr...@atherton.com
> The whole problem is that SOFTWARE HOUSES REFUSE TO TREAT
> SOFTWARE LIKE ENGINEERING! They try to treat it like production
> and manufacturing, and then they cry when deadlines are missed.
While I agree that software houses refuse to treat software
like engineering, I do not believe that this is why they cry
when deadlines are missed. Even other engineering businesses have
deadlines and schedules. Accuracy of prediction varies across
disciplines and even companies. But people still have them.
Companies "cry" when people miss deadlines because the economic
consequences of missing them can be threatening to either the
individuals or to the organization as a whole. This is as true
in engineering as in manufacturing and is a psychological and economic
factor, not a scientific or engineering factor.
> The point is that "deadlines" are inappropriate in engineering, because
> the quality of the product suffers (defective cars, TVs, etc.) You don't run
> an engineering firm with that mentality. You don't run a software house
> with that mentality, but many do. An engineering firm with a profit only
> motive will go out of business after all the structures it designs collapse.
> Actually, this is what is happening with a lot of software houses
> these days.
Actually, since much of engineering IS about tradeoffs it is entirely
appropriate to consider deadlines in development. Building an elegant
product to 80% completion and then abandoning it due to lack of money,
or because another company has already nailed down the market is not
good just because the quality of the implementation that was completed
was high. Success comes only if you survive long enough to deliver
your products. Money, manpower and time not being unlimited, you have
to meet some constraints to ensure success. Within those constraints
it is entirely appropriate to concern oneself about product quality.
I also don't doubt that some people paint the constraints as being
tighter than they really are, thereby making inappropriate decisions
about product quality. But that doesn't mean that one should
abandon managing to constraints--it just means you should get more
realistic managers. I am loath to make any blanket claims about
any class, but in general, many engineering firms go out of business
not from too much attention to cash flow and profits, but from too little
attention to short term finances to ensure their day to day survival
long enough to reach the long term.
> I predict "software building blocks" to be a dismal failure, unless one is
> happy with resource hog programs. There is too much tuning that a craftsman
> can do to double and triple performance in certain situations. Adding
> "black boxes" to eliminate this process will guarantee mediocre programs.
> That is, until we have 10000 MIPS on every desktop, then maybe "how well"
> software works will no longer be an issue.
Note that this is already happening. Some machines are getting fast
enough that people are accepting resource hog programs that are easy
to build and maintain even though more efficient systems are possible.
The most obvious cases are 4GLs and spreadsheets that are coded by
non-programmers. Higher-efficiency programs (in terms of transactions
per second, etc.) can be written by trained programmers in languages
like assembler and C. But these programs are already "fast enough" for
their users, and programmers are not consulted.
Additionally, more and more programs are used "off the shelf". Once
upon a time almost every company's payroll, Accounts Payable and
Accounts Receivables were done with specialized programs suited
especially for that company. While this is still true for many
large or older companies, most small or new companies now either
purchase canned software packages of this sort or subscribe to banking
services that use one set of these programs for all their clients.
> /.. line of code level... [same as] describing the depth of
> /every groove at each point (external interface) and the precise molecule
> /by molecule composition of the material.
> Every item is accounted for in the building estimate, while every item
> IS NOT accounted for in a software estimate. No wonder software
> estimates are wrong so much of the time. But then as Scott points out,
> you can't complete the whole program just to get the software
> estimate. The software business is a tough one to be in.
My point is that a succinct description is sufficient to describe
an item in a building estimate, but that without standardized components
this is not possible in software. If the precise threading depths and
widths and nut and bolt lengths were not standardized, you would have
to write a paragraph or two about each one used in the building
describing not only these depths and widths and lengths but the
tensile strengths of the materials, their thermal properties, etc.
If this were done for buildings, instead of relying on standardized components
for which these paragraphs are already written and well known, then
detailed design costs would dominate construction costs too, and
probably fewer buildings would be built from such complete plans. (In fact,
back when timbers were custom cut and nails were hand made, fewer
design plans were drawn. Even huge cathedrals were often designed "on
the fly" over many years, with different parts of the buildings
done differently by different craftsmen.)
Now some building projects have really immense planning phases that
dwarf the complexity of some software projects. However, for those
complex building projects we find that the rest of the costs of the
project are giant too. I'll bet that if you compare like-sized building
projects to like-sized software projects ("like-sized" meaning similar
total man-hours to completion of the entire project) you'll find that the
software design phase consumes a larger share of the budget than
the building design phase does. Spending even more on the design
phase to ensure even more precision is often uneconomic
for software.
> /Moreover, because the engineering costs are small
> /compared to the overall construction costs, people are willing to pay for
> /a "paper study"
> This is true. In a big project, the engineering cost is small compared to
> MATERIALS. But the accuracy is good because all the components
> are known quantities to estimate.
Yes. Now if you are a little less specific, you'll save more on the
up front engineering cost, but at increased variability of the production
phase. At some point you reach an individual's or organization's balance
for predictability vs. cost (the flip side of risk vs. reward). That's
what determines what the actual tradeoff is. Unfortunately, largely
because of the lack of software components, the specification process
is very expensive. So many people trade off considerable design
specificity to achieve lower costs. Granted, that comes at some cost
in predictability as well. People would like both increased predictability
and lower cost. But sometimes the cost factor is the gating factor
since if it costs more you might go out of business or at least
cancel that investment. In general in the software business today
these tradeoffs tend to be toward lower up front costs at the cost
of poor predictability. Apparently this common tradeoff is a reasonable
one, since companies that do more complete specs are not by and large
dominating the industry's profits yet.
> /However, in Cliff's suggested version.....management has to totally
> /commit to a project even before they know what it will cost.
> /...[this] shuffles the cost prediction problem to someone else, it
> /doesn't solve it.
> My point is that software is not the business to be in if you want the
> accurate cost prediction that occurs in the construction industry.
I agree. And I don't think that people are in it for that reason. I
think that they are in it because there is some money to be made there
and because they think they have the special skills to be successful
there. I believe this is true at both the macro and micro levels.
Even so, I know that people have different tolerance levels for accuracy
and predictability. So within any given organization there is some
tension about the precise level of tradeoffs. It is by no means always
this way, but I have frequently found that many of the programmers have
a lower tolerance for ambiguity than their managers. So while a manager
WANTS the same low level of unpredictability as their engineers, and they
may REWARD the more predictable or favorable outcomes, they will be less
tense personally about possible variance in the predictions than the
engineers. Now this reward structure may tend to aggravate the self-induced
tension that engineers have about lack of precision in their estimates, and
I suspect that this psychological fact may be the real root of this discussion.
> I would rather see software classified as engineering, with no construction
> component, and have all software ventures classified as "engineering
> studies". Such studies are not fixed-priced, and the accuracy is equal
> to the amount of cash you pump in. If you want an accurate estimate, you
> PAY FOR IT. Don't do it breaking the backs of programmers who make the
> educated guesses, which is all that can be made unless the program is
> coded and finished. Programmers are really getting a raw deal in the
> current scenario. That's why so many of them change jobs and are
> generally disillusioned with their career.
I truly believe that no semantic change in whether we classify the work
as engineering or construction will make a difference in this matter.
As I noted above, the desire for price control and predictability is
a basic economic need for survival of the individuals (managers) and
organizations in charge. It is also a basic psychological need of
the individuals for some sort of stability, purpose, and feeling of
control over their future. I do not believe that you can make these
psychological needs go away by a change of classification or terminology.
I do believe that Cliff is correct that if you want an accurate estimate you
have to pay for it. I believe that any competent manager who has done this
for a while will recognize that this is necessarily true. But some (and
I include myself) will tolerate some amount of unpredictability ("I just
want a 'ballpark' figure") in order to save some cost. We don't require
absolute accuracy--close is fine. Closer is better, as long as it doesn't
cost much. If it costs too much I settle for less accuracy. I try to
be humane about this, and let my people know that I understand the level
of variance implied (and I take care to consider that when I have to
make commitments myself). However, I have frequently found that
my engineers are personally less willing to accept the same level of
ambiguity as I am. As I say, I try to be humane about this, but
sometimes things do come down to personal differences. I hope that
my engineers in the past haven't felt ill-served by this--perhaps they
will reply and tell me different. I do think that there is one other
exacerbating problem, and that is that the ability to predict seems
to be in part determined by years of experience. Some people have
had many years but don't seem to have been able to convert any of
that experience to good use, but there are many experienced people
who give MUCH BETTER estimates and predictions than others. I rarely
find inexperienced people who seem to have a talent for good estimates
or predictions. This is unfortunate for our profession at present,
because so many of the line manager slots are filled with people with
only a few years of management experience and often less than ten
years programming experience. And so lots of engineers pay for their
management's inexperience, leading to disillusionment, etc.
> /The truth of the matter as to how building contractor's can predict
> /schedules better than software engineers is that they have a book
> /of expected times to do standard tasks. The tasks and materials
> /are pretty standardized and have been done by large numbers of
> /people so that reasonable maximum times have been observed.
> THIS IS THE CORE OF THE ISSUE!
> I submit that this will NEVER occur in software because the components
> are necessarily different for each program's unique needs. All
> buildings are made of the same pieces. Why should we think that
> programs in THOUSANDS of industries will ever share many of
> the same components? I think the idea is poppycock.
Personally, I think that the jury is out on this one. There are still
some "custom craft" tasks that don't show up in contractor's books
and which are subject to more variation and risk. I think we will
always have some custom craft work as Cliff points out. But it is not
clear to me that we won't have a growth in components. In some ways
I think that we already have, but that you see the benefits most
strongly in the end-user systems and applications program areas.
I've already mentioned the re-use of accounting software, of spreadsheet
macros, of 4GL libraries by end users. We are also seeing standard
libraries like Xtk, Motif, even the Unix programming libraries, that
people are adopting and reusing where before they would have written
their own window systems, etc. Now I (re)used the Xtk widgets
a lot, and to the extent I did, I didn't have to worry about variance
in estimating how long those functions would take to build. Now, there
was a case where I needed a widget that wasn't in Xtk, so I had to
build it from scratch. It was harder to estimate. That's life.
>
> /It is different in the software world. ..people ... make
> /an estimate without any "book" [to] consult.
> / ... tasks...can be more variable.
> Exactly what I've said.
>
> /[distinguish] a schedule (i.e. a contract) and an estimate.
> [discussion of inclusion of variances in estimates, scheduling, etc.]
> Omission of these items surely contributes to erroneous estimates.
>
> /For many software projects time to market is
> /extremely important. Early entrants often make significantly more
> /money than late entrants.
> Try a small core of programmers who own enough stock in the company
> that they will make a nice nest egg if the product makes it big. Treat
> them as an R&D group rather than a production group with
> conventional management. I think this will produce more software
> of better quality in less time than conventional approaches. Financial
> backers of the company won't like this approach though because
> if one person leaves, the value of their investment will be diluted.
I think that in many cases if you ran this experiment you would see
the benefits you claim. But I would attribute it to the fact that you
had selected a SMALL group and made the individuals more directly responsible.
Because of their economic incentive I would guess that some would take
a more personal interest in what the customer wanted too. And those things
can lead to more successful products. But if no one paid attention to
the costs I would not be surprised if several of these groups ran out
of money before completing their projects. And if the host organizations'
ongoing success was dependent on these projects I would expect some of
the host organizations to collapse too.
> / So there is the natural pressure (mentioned
> /above) to look for optimistic estimates.
> YES! I find I always have to match the estimate my manager
> suggests, because if I don't I'm in trouble. Actually this
> makes estimating easy, except that I am still responsible
> when the estimate is wrong.
Well, I can understand that. But if you also say "no guts, no glory",
what does this imply about the need for "backbone" in setting estimates
and schedules that are reasonable? Doesn't "guts" mean taking an
unpopular stand sometimes in the hopes that long term rewards will
pay off?
> Re: Discussion on OOP:
> I have yet to see how OOP solves any of the problems regarding
> estimating time for writing a program. Someone at some point has
> to develop a sequence of steps that make each "object" work. The
> subjects I'm discussing aren't even relevant to CAD because using
> CAD isn't writing a program - CAD is a higher-level abstraction using
> an existing set of modules already written.
>
The way OOP helps is that it makes the components more standardized
and higher level. If you can live with the performance and flexibility
trade-offs, OOP means that you will have fewer primitives that you
will have to specify. Fewer, more standardized primitives means less
detailed and less costly designs relative to the final products. It
changes the cost of the paper study relative to the variability in
the final product, by reducing the cost of the paper study on a
per unit basis. This doesn't mean that specification isn't done,
it means that instead of having to specify a page of code you say
"displays file name in an XwstaticWidgetClass widget" and you are
just as unambiguous. That's less to write, less to read, fewer
possibilities for error in specification, which add up to a less
costly specification.
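The same economics can be seen even without OOP, in plain C: a standardized, documented library primitive lets a one-line specification stand in for a page of bespoke code. A minimal sketch (using `qsort(3)` as my own illustration, not Scott's Xt example):

```c
#include <stdlib.h>

/* Comparison callback for qsort(3): ascending ints. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Because qsort is a standardized, already-documented component, the
 * design spec for this step can be one unambiguous line -- "sort the
 * array with qsort(3)" -- instead of a page describing a hand-rolled
 * sort routine. */
void sort_ints(int *v, size_t n)
{
    qsort(v, n, sizeof v[0], cmp_int);
}
```

The specification savings come from the component's behavior being common knowledge, exactly as with standardized nuts and bolts.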
Scott McGregor
mcgr...@atherton.com
> But the scenario you describe has far more limited choices than
> does a software programmer. You have an infinite number of different
> functions and arguments you could devise to perform a task. You have to
> do research to find the right ones.
>
> With the electronic circuit, you have a finite number of components to
> select from to do the job, because you can't "build your own" on the fly
> like you can with software.
Ah, but you can, you can pay an ASIC shop to do this for you. The difference
is that the perceived cost in starting down this path to nonstandard
solutions is high. People usually do it intentionally for "proprietary
advantage" (my system is faster, more robust, more reliable....)
But in the eyes of many programmers the cost of starting down this
path is low, so they don't see the benefit. It may be that the
benefit is recognized at a different level, namely at the level
of the end user. To the end user, the ability to use an off-the-shelf
component instead of having to call a programmer and get
into the MIS backlog queue may be just the same level of cost
that people perceive in going to an ASIC. If true this has
considerable impact on the software industry at large, though
programmers may not feel affected for quite a while.
Scott McGregor
-ed
P.S. The usual disclaimer applies here.
In a course I took on history of american technology, we covered the
fledgling gun industry. When the idea of interchangeable parts was
first pursued, it was much more expensive to build guns that way, than
the old, because the precision machining required had to be done by
hand. It took the industry about 30 years (a remarkably short period
of time for the era) to develop the technology necessary to make
interchangeable parts on a large scale.
Actually, it was the consumer who demanded interchangeable parts for
guns. Eli Whitney convinced the bigwigs in Washington that it was
a worthwhile goal. He argued that being able to construct working
weapons out of a bunch of damaged guns was a valuable asset on a
battlefield. The gov't agreed and poured lots of money into achieving
that goal. The producers didn't really have a choice in the matter.
If they wanted the contracts, they had to join in.
One interesting note is the environment in which the technology
developed. At the time, the gov't required that contractors could
not hold back new technology from them. As a result, the national
armories were kept up to date on the latest techniques, and became
a focal point for development. Any gun company that wanted to find
out the latest breakthroughs only had to send a representative to
one of the national armories.
If people want references, I can dig them up.
--
Trip Martin
ni...@pawl.rpi.edu
...
>Various mathematical software libraries are successful because
>components are reliable and easily identifiable. This same success can
>be had in other domains too! Someone just has to buy into the high
>start-up costs in the belief that the long-term benefits are there.
>Unfortunately, the procurement process of American society does not allow this
It should be noted that mathematicians spent several hundred wall clock years
and countless man-years exploring the domains in which these libraries work.
This produced a relatively small yet powerful conceptual basis for these
domains together with an elegant notation for the concepts. The notation
was validated in innumerable applications, and the theory was elaborated
in great detail. Most importantly perhaps, the notation (and the most useful
part of the theory) was taught to most engineers and scientists. When
creating these libraries, programmers didn't have to discover the utility
of log, arctan, eigenvector, etc. Moreover, prospective users already
know the terminology used to identify and describe components in statistical
or linear algebra packages, for example.
The lack of a well developed and widely known formal basis for many other
computing domains may well prove to be a serious impediment to the development
of reusable software.
/Bill
For people who have done software development on one operating system,
and one language, at one company for a reasonable length of time, I think
it is obvious that people *DO* REUSE software. The software that they
reuse is the software that they have already written in the past. Sometimes
they can reuse whole subroutines unchanged, but sometimes they grab blocks
of code and modify them a little. The problem is that people don't reuse
other people's code as easily. I do not believe that this is totally
due to economic reasons (though those clearly play a part). Rather I
believe that the most important reasons that reuse of other people's
code fails is that it goes against some of the psychological rewards
that programmers want. I actually did see a major reuse situation,
over ten years ago with Fortran libraries. The situation was totally
accidental in getting started and was extremely dependent on the
personality of the software librarian (who was a reference librarian
by training and interest, not a programmer), and on some of the physical
features of the layout of the building where the programmers and the
librarian worked. Later, this job was overhauled and programmers given direct access
to the library and library tools and the level of reuse quickly fell back
to its original level. I've discussed this situation here before, so
I won't go into more depth unless there are requests, but the point is
not to overlook the power of personal motivations in situations concerning
reuse.
> Various mathematical software libraries are successful because
> components are reliable and easily identifiable. This same success can
> be had in other domains too! Someone just has to buy into the high
> start-up costs in the belief that the long-term benefits are there.
> Unfortunately, the procurement process of American society does not
> allow this.
Also, many of these libraries cover functions that are simple, easily
categorized, and well-known. A few sentences (often just the name)
are sufficient to tell what the function does. This means that the
cost of learning these libraries in terms of programmer's time and
interest is low compared to the cost of using less regularized
libraries with more special purpose routines.
The importance of human psychology on this problem is little appreciated,
but is greater than we may think. I believe that we have a lot to learn
from the people who are trying to understand the general principles behind
acceptance of groupware systems (of which shared libraries of reusable
components could be an example).
Scott McGregor
mcgr...@atherton.com
>Actually I've always thought the best analogy for software development
>is movie production. It shares many similarities.
One of which is that some is B-grade and some is art. Unfortunately,
what determines the success of a program or a movie is not based solely
on its internal (the code) and external (the interface) aesthetic appeal.
They are vulnerable to the degrading effects of preoccupation with the
bottom line.
If software development is like movie making, I have had the thought
that programs are exactly opposite to Hollywood sets. A set has a
flashy, majestic appearance from the front, but from behind there is
nothing substantial. In a program, few have any idea of the subtle
and magnificent webs of intricate interactions that lie between the
choreography of twirling electrons in its wires to the parade of
photons meeting our eyes. What complexities lie in a program that
is mistaken by the user to be lying dormant! A program is an
inside-out set. The only similarity between the two is that the
viewer is blissfully unaware of some astonishing aspect.
Of course, the ideal programs of the future _will_ be more like movie
sets, so that my standard of Excellence in Software can be more
readily achieved:
When the users say "there's more?!" and the programmers say "that's it?!"
Unfortunately, I've seen many "programs" (perhaps they could be called
"systems", but to use either word is to defame those professionals who
are true programmers and system-builders) which remind me exactly of
Hollywood sets. These things are "database applications" written,
without proper design, for PCs.
What has happened is that a variety of tools, such as dBASE and Paradox,
have nice "interface builders" which, in the wrong hands, become mere
"facade builders." People see menus, multi-colored displays, and
whiz-bang functions keys and perceive an effective system, even though
there may be "nothing substantial" behind.
It used to be the case that printing on 11 by 14 edge-punched paper
had authority because it was computer output. Now that same mystique
has moved to the input side too.
--
Edward Robertson robe...@cs.indiana.edu
Computer Science Dept
Indiana University 812-855-4954
Bloomington, IN 47405-4101
My shop has a good repertoire of C routines, and we generally use them.
In some respects, we assembled the software from off the shelf components.
Half the time I've spent on this project (1 of 5 programmers spending
28 weeks) has been learning the switch. The switch is essentially a
single purpose computer with 1.8 Megs. of executable written over a
15 year period. Our Technology group produced dozens of conflicting input
and output formats for switch commands, the documentation of which
encompasses 5 binders and 4,500 pages.
15 years of incremental programming produced many inconsistencies, both in
software and documentation.
The upshot: adapting our product to the switch took a long time not
because we couldn't assemble code from reliable building blocks, but
because we had to learn and relearn the task.
Will
"Because I care...." Dr. Moreau
I saw your note about PC database systems badly "designed" .......
I've been teaching an IS course which includes a segment on
database design, using Paradox. (I also use Paradox for a variety
of small systems on a research project I direct). Paradox has a
neat interface, but is awful in providing support for program
development & documentation (e.g., lacks a good interactive editor
to facilitate documentation, and tools to locate x-refed variables,
etc.) I've talked with designers of minicomputer relational
products like Ingres, and they claim that the lousy environments
behind the PC products are also typical of relational database managers for
minis.
Products like Paradox, Dbase, etc. INVITE bad designs ...
even though I prefer them to C for writing database systems. (grin)
There are a number of books about Paradox & 3 million similar
books about Dbase, etc. These books teach the mechanics of the
systems, much in the way that pascal & C books teach about language
features rather than software design in language X.
On this campus, the administration is developing a number of
information systems in Revelation ... using student programmers who
are not trained in IS design .... you can imagine the quality of
the resulting products ....
I have yet to see a good book on the design of database systems
that is aimed at highly interactive products
(rather than at transaction-oriented
mainframe systems).
Have you seen anything of use for professionals or students
of this kind?
Rob Kling
UC-Irvine
I claim that anything that we build -- a command and control system, a
flight reservation system, an operating system, and so on -- has a
conceptual basis; if they didn't we couldn't build them. The notation
for each of these domains can be described by a language which would
provide the syntax, semantics, and even pragmatics for the modeled
domain. Individuals can be easily taught this new "notation", it would
not be any more difficult to learn than another programming language, or 4GL.
|> The lack of a well developed and widely known formal basis for many other
|> computing domains may well prove to be a serious impediment to the
|> development of reusable software.
True, but if a domain is that misunderstood then we are not building
implementations of it anyway.
I reassert that any artifact that we build can provide components for
later reuse. I also assert that if we have built a component we can
adequately describe it in a formal or informal notation so that its
complete behavior is understandable by a human. I also know that
providing such information for later reuse requires much labor at great
cost. This is the impediment to software reuse. The issue is "given
that you can describe a domain, how do you build an information repository
in a cost-effective manner."
As an example, take the domain of data structures. It is a relatively
simple domain. It is well understood. There are both formal and
informal methods for describing modeling representations and storage
structures. We understand the complexity of them all, including special
cases and programming tricks. Yet to date, there is not one reusable
data structure depository. Why is that? Because the cost of putting
all the knowledge contained in existing data structure books and
articles into such a repository would be incredible.
My thesis work was on managing design information for later reuse. I
took a couple of toy problems -- the Dutch National Flag algorithm and
quicksort -- and described them so that they could be later reused. The
amount of information obviously was subjective. But in each case it
took approximately 5-10 pages to describe them so that I felt someone
else could understand and reliably reuse them. After working on these
simple problems I came to the conclusion that the amount of effort
required to document complex components would be much greater than the
actual construction of said components.
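For readers who haven't met it, the Dutch National Flag problem Kirk mentions asks for a one-pass, three-way partition of an array around a pivot. A sketch (on int keys, which is one common formulation) shows how short the code itself is, which makes his point vivid: the 5-10 pages were description for reuse, not the algorithm.

```c
#include <stddef.h>

/* Dutch National Flag: one pass rearranges v[0..n) into three bands:
 * elements < p, then elements == p, then elements > p. */
void dutch_flag(int *v, size_t n, int p)
{
    size_t lo = 0, mid = 0, hi = n;
    while (mid < hi) {
        if (v[mid] < p) {           /* grow the "< p" band */
            int t = v[lo]; v[lo] = v[mid]; v[mid] = t;
            lo++; mid++;
        } else if (v[mid] > p) {    /* grow the "> p" band */
            hi--;
            int t = v[mid]; v[mid] = v[hi]; v[hi] = t;
        } else {                    /* already in the "== p" band */
            mid++;
        }
    }
}
```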
As a result, I concluded that software reuse should only be attempted if
you are producing the same type of artifact over and over again. For
example, if you write 25 Ada compilers for different machines then
software reuse is feasible, but if you only write 3 of them then it
probably isn't. I also concluded that software reuse as many people
envision it is impossible unless some manner of automating the
acquisition of information is achieved. If someone is looking for a PhD
thesis, here it is.
Kirk
...
>I claim that anything that we build -- a command and control system, a
>flight reservation system, an operating system, and so on -- has a
>conceptual basis; if they didn't we couldn't build them.
One must be careful to distinguish between mathematical domain languages
and programming languages here. Generally, mathematical domain languages
being declarative and nonconstructive are more powerful than programming
languages. That makes them useful for concisely expressing precise
specifications for software components -- provided you can find a
suitable mathematical domain within which to represent a particular
software component domain. For example, it might be the case that the
reusable components for flight control systems could appropriately be
represented in the mathematical theory of partial differential equations,
or that those for airline reservation systems could best be described using
graph theory. If furthermore, there turned out to be a Find_Flights
operation among the airline components which `finds all flights from
airport A to airport B of duration less than T leaving on date D', this
operation would have a precise graph theoretical specification which
would answer any questions about what it would do. Note here that
mathematical domains and programming domains differ in that Find_Flights
belongs to the latter, but not the former.
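A sketch of how the Find_Flights operation above might look as a reusable component in C (the struct fields and signature are my own illustrative assumptions, not part of Bill's specification):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical flight record -- field names are illustration only. */
struct flight {
    const char *from, *to, *date;
    int duration_min;
};

/* Find_Flights per the informal spec: collect every flight from
 * airport a to airport b with duration under t minutes, leaving on
 * date d.  Returns the number of matches stored into out. */
size_t find_flights(const struct flight *tab, size_t n,
                    const char *a, const char *b,
                    int t, const char *d,
                    const struct flight **out)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (strcmp(tab[i].from, a) == 0 && strcmp(tab[i].to, b) == 0 &&
            tab[i].duration_min < t && strcmp(tab[i].date, d) == 0)
            out[m++] = &tab[i];
    return m;
}
```

The one-sentence English (or graph-theoretic) specification pins down this behavior completely; the question in the text is whether such an appropriate mathematical domain always exists.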
If by a `conceptual basis' you mean a mathematical domain in which
components from a programming domain can be specified exactly, I expect
your claim, taken literally, is correct. The real problem, of course, is
to find an APPROPRIATE conceptual basis [mathematical domain] for describing
components. For example, choosing number theory to describe airline
reservation components (using say Godel numbers), while technically
quite feasible, would produce totally useless specifications.
I presume accordingly that your claim should be interpreted as postulating
the existence of appropriate mathematical domains for describing any
reusable components that will be created. That seems far from obvious.
> The notation
>for each of these domains can be described by a language which would
>provide the syntax, semantics, and even pragmatics for the modeled
>domain.
If one did not know how difficult it is to work out the concepts and
theory for a new mathematical domain, this could almost be read as
suggesting the creation of a theory for each programming domain.
> Individuals can be easily taught this new "notation", it would
>not be any more difficult to learn than another programming language, or 4GL.
Most of us find it fairly challenging to learn a new mathematical domain.
It's not, however, the 'notation' of differential equations or category
theory that gets to you; it's the insights about the concepts and the
theory that take the time and effort to learn.
Unfortunately, a superficial knowledge of a mathematical theory won't
suffice for programming. A new age programmer who learned his number
theory from a pocket calculator might not know that a program that
sums up an array of integers from bottom to top gets the same result
as one that sums from top to bottom, even though he understands addition
perfectly well at a notational level.
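The summing example above rests on a small theorem: over the integers, addition is associative and commutative, so any order of summation yields the same total. A sketch of the two programs (hypothetical names, for illustration only):

```python
def sum_bottom_up(xs):
    """Sum the array from index 0 upward."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_top_down(xs):
    """Sum the array from the last index downward."""
    total = 0
    for x in reversed(xs):
        total += x
    return total

xs = [3, 1, 4, 1, 5, 9]
print(sum_bottom_up(xs), sum_top_down(xs))  # both print 23
```

The equality is guaranteed for integers precisely by the theory the calculator user never learned; with floating-point numbers, where addition is not associative, the two orders can actually produce different results, which makes the point even sharper.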
/Bill
It's just that with my work I came to the conclusion that
OOP is not of immediate benefit.
> I feel it is possible to
>separate the design phase from the construction phase in most instances.
NO NO NO!
My point is that the "construction phase" is NOT really a construction phase
because you still have to DEVELOP. This is the whole problem! The term
"construction" implies a production schedule, which is NOT the case. With
software you are NOT assembling components, you are DEVELOPING new
components and a control structure to surround them. The terms
ARCHITECTURE and ENGINEERING
should be substituted for "design" and "construction" respectively.
> Most feel
>that a program of that size[30000 lines] doesn't justify going through
>the preliminaries due to the overhead.
I agree. By the time the formalities are done, you could have half the
program finished.
>Are you kidding? Every engineering firm I ever have been associated with runs
>on deadlines! ...If you don't think that car manufacturers and TV makers work
>under deadlines, guess again.
My point is that these deadlines are by default flexible. You can't market a TV
that does not work, right? Therefore the deadline gets moved whether anyone
likes it or not. And I see evidence that those engineers are not pressured to the
same degree as programmers. Management knows that pressure = errors =
delayed completion = increased costs. (But then you have to weed out loafers
who take advantage of a non-production environment to not produce.)
>I don't think users of software components expect to just 'plug in' a bit
>of code for instant functionality (at least, I hope not!).
I'm getting sick of manager types who have never programmed (MBAs)
talking about OOP this way, as if it were the "final solution" to the
software problem.
> I could have
>done a paper design of the new phone line, but the time required compared to
>the size of the project didn't justify it. You can do the same with software,
>but you don't have to. It's not the only way.
A great observation/explanation. You have made me aware of
something I have wanted to explain to people but didn't know how. When
you have a 3 man-month project, it's better just to start it than to
spend 1 man-month spec-ing out every detail that a good programmer
will do instantaneously "on the fly".
>>Programmers are really getting a raw deal in the
>>current scenario. That's why so many of them change jobs and are
>>generally disillusioned with their career.
> There seems to be a belief that if
>Joe Programmer says he can do it in 5 weeks, he can probably do it in 4 and
>with fewer resources than he claims. ...and that they never learn from the past
>so keep repeating the same mistakes over and over. They get pissed because
>software groups can never come up with an accurate estimate of time and are
>always over the time limits. Why don't they learn?
I think managers - especially MBA types - are (expectedly) insecure about the
"black magic" of programming. The only way for them to get security is
to make themselves feel that they "got it done as fast as possible", and
what better way than to allocate LESS time than the programmer says
he needs.
Thus, by default they disrespect the programmer and treat him as if he is not
telling the truth. This is the core of how programmers are getting a raw deal.
They are not respected for the work they do - they are treated as if they are
dishonest. (I suspect many WERE dishonest, perhaps making a manager treat
all programmers that way.) They are disrespected for making estimates on
DEVELOPMENT which can't be accurately quantified (you are not assembling
pre-made components).
>This
>leads to nothing but heartburn for the programmers and engineers who are
>saddled with the project.
>Unfortunately, most upper management I have been exposed to couldn't care less
>about the state of employees. They aren't the ones that have to do the hiring.
>It's the line managers that are sucking down Rolaids and trying to figure out
>how to jump through the next hoop.
And they wonder about job burnout...
John Dudeck writes....
>A lot of what is gained in OOD is in the de-sequentializing of our problems.
Ahhh-I got you again!
How does de-sequentializing happen?? Not because of the "magic" of OOP.