
Who will be blamed and does it stick?


Michael Schuerig

Dec 9, 1997

Over on comp.software.testing there are two interesting threads
originated by individuals who are currently working in projects that
appear to be doomed from the start. In the response so far the advice
has been unanimously to get out as fast as possible.

I haven't been involved in professional software development, yet. But
I'm curious what the common consequences of failed projects are. Who
will be blamed? Those people who knew the project would fail, but
against all odds tried to pull it off? Does it have an adverse effect on
their professional reputation?

Does it further software quality if good people leave hopeless projects?
That is, do those projects die quickly and completely or is the agony
even worse?

Michael

--
Michael Schuerig
mailto:uzs...@uni-bonn.de
http://www.uni-bonn.de/~uzs90z/
"And your wise men don't know how it feels to be thick as a brick."
-Jethro Tull, "Thick As A Brick"

Neil Wilson

Dec 9, 1997

The problem with failing software projects is that those who are running
them don't know they are failing and/or won't admit it. It takes a great
deal of courage to say 'we have a problem here', particularly if you have
already spent oodles of money.

The issue is simply bad management and project control. If nobody will
listen when you say there is an issue and you are persistently ignored,
then you have no option but to move sideways.

Unfortunately you will find it a lot in this industry, given that most of it
is a triumph of marketing over reality.

---
Neil Wilson (Neil@aldur on the demon.co.uk domain)
Aldur Systems Ltd, Ossett, UK


Michael Schuerig wrote in message
<1d0z0ck.1n5...@rhrz-isdn3-p51.rhrz.uni-bonn.de>...

Andrew Semprebon

Dec 9, 1997

Michael Schuerig wrote:
>
> Over on comp.software.testing there are two interesting threads
> originated by individuals who are currently working in projects that
> appear to be doomed from the start. In the response so far the advice
> has been unanimously to get out as fast as possible.
>
> I haven't been involved in professional software development, yet. But
> I'm curious what the common consequences of failed projects are. Who
> will be blamed? Those people who knew the project would fail, but
> against all odds tried to pull it off? Does it have an adverse effect on
> their professional reputation?

In my one experience with such a project, there was a lot of
finger-pointing, but it seemed to me that the developers got
off pretty well. The users blamed management and the group
of users that was supposed to represent them. Management
blamed the users. What I found most amazing was that after
working on the project for just a few months, it was apparent
to me that it exhibited all the classic symptoms of a major
software development disaster, but nobody in management seemed
to notice. Originally planned for one year, it dragged
on for another four years before they got another contractor to
work on it.

Even though it was obviously poorly managed, I stuck with the
project to get experience with C and Oracle. In retrospect,
I should have left earlier.

> Does it further software quality if good people leave hopeless projects?

Not in my (limited) experience.

> That is, do those projects die quickly and completely or is the agony
> even worse?

Unfortunately, it is usually only the good people who realize
the project is in trouble. When they leave, management stops
getting any feedback about how poor the quality is, more bugs
get through testing, etc. The net result is that no-one knows
about the problems until the product gets to the users.

ppgo...@hotmail.com

Dec 9, 1997

In article <1d0z0ck.1n5...@rhrz-isdn3-p51.rhrz.uni-bonn.de>,

uzs...@ibm.rhrz.uni-bonn.de (Michael Schuerig) wrote:
>
>
> Over on comp.software.testing there are two interesting threads
> originated by individuals who are currently working in projects that
> appear to be doomed from the start. In the response so far the advice
> has been unanimously to get out as fast as possible.
>
> I haven't been involved in professional software development, yet. But
> I'm curious what the common consequences of failed projects are. Who
> will be blamed? Those people who knew the project would fail, but
> against all odds tried to pull it off? Does it have an adverse effect on
> their professional reputation?
>
> Does it further software quality if good people leave hopeless projects?
> That is, do those projects die quickly and completely or is the agony
> even worse?
>

I've spent many years working on this very set of problems, and trying
to advise others who are doing the same. Some thoughts:

1. The venerable James Martin wrote that most projects are wrong because
they are the wrong projects. In other words, they are projects that are
aimed at accomplishing things that should not be accomplished. While
this is a very general and sweeping statement, it is impossible not to
encounter anecdotal evidence of it at every turn in the business app
field.

2. Project managers fall in love with their projects. They lose the
ability to make sound judgments because they want the thing to go in a
certain direction, and tunnel vision sets in. The more prone a PM is to
this ailment, the more likely it is that he/she will have an exit
strategy in place, in my experience. The well-timed bailout to another
job is the most consistent response I have seen to the "doomed project"
syndrome.

3. I have seen project leadership outsourced primarily so that risk
can be offloaded from the parent organization. This is a nice way
of saying that these firms basically hired fall guys. If the
project goes into the tank, the failure can be blamed on the outsiders.

4. Immediate danger to careers is usually higher at the top of a
project's hierarchy than at the bottom. In other words, the safest place
to be is down the ladder and out of the line of fire. High-end project
leadership is a risky and stress-intensive line of work. Under stress,
people behave in strange ways. I have seen a PM insist that his people
work on Christmas Day in order to meet an arbitrary deadline. One
individual's wife had just had a baby about three weeks before Christmas;
this guy was at work early on Christmas Day and late into the evening.
My recollection of him trudging into the office on Christmas Day is one
of my most telling images of this line of work.

5. Who will be blamed? This is usually a political thing. Blame
avoidance, and blame redirection, are skills. I have seen some very high
skill levels in this area. I have seen just plain bad luck result in
innocent persons getting blamed for the failures of others; wrong place
at wrong time.

Opinions about project management and project failure avoidance are
like noses: Everybody in this business has one. How you develop yours
will determine a lot about your career. My advice: Look, listen, learn,
observe, take notes, and trust your instincts. Do not fall in love
with projects, processes, or methodologies. Ask tough questions and
do not settle for weak answers. Stay away from taskmaster project
managers unless you enjoy pain and want your personal life ruined.
Run, do not walk, from *my way or highway* leadership. Read every
book on the subject that you can get your hands on. Distrust people
whose idea of teamwork is to gather a bunch of people and insist that
they be a team. Avoid people who use sports metaphors in their speech,
who talk about *keeping things on track* and use other quasi-military
or quasi-Knute-Rockne language. Flee from ambitious project managers.

-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/ Search, Read, Post to Usenet

Guy Rixon

Dec 9, 1997

Michael Schuerig wrote:
>
> [...]

> I haven't been involved in professional software development, yet. But
> I'm curious what the common consequences of failed projects are. Who
> will be blamed? Those people who knew the project would fail, but
> against all odds tried to pull it off? Does it have an adverse effect on
> their professional reputation?

From what I've seen, the people most likely to be blamed are those
associated with the project when the project is _seen_ to go wrong.
Sometimes people who make bad decisions early on and then leave don't
get blamed because by the time the project fails, nobody can remember
who said what. The worse the project, the more it over-runs and the
less it keeps records, making this much more likely.

Don't ever be the last out of a sinking project; there are never enough
lifebelts.
--
Guy Rixon, g...@ast.cam.ac.uk
Software Engineering Group, Royal Greenwich Observatory
Tel: +44-1223-374000  Fax: +44-1223-374700

Alan P. Burke

Dec 9, 1997

I've recently read a book which I recommend highly; if "blame" is the
subject, you should have a look at Edward Yourdon's 'Death March - The
Complete Software Developer's Guide to Surviving "Mission Impossible"
Projects' (ISBN 0-13-748310-4, from Prentice-Hall). It reinforces my
perspective that unrealistic expectations are the worst contributors to
failure.

ppgo...@hotmail.com wrote:
>
> In article <1d0z0ck.1n5...@rhrz-isdn3-p51.rhrz.uni-bonn.de>,
> uzs...@ibm.rhrz.uni-bonn.de (Michael Schuerig) wrote:
> >
> >
> > Over on comp.software.testing there are two interesting threads
> > originated by individuals who are currently working in projects that
> > appear to be doomed from the start. In the response so far the advice
> > has been unanimously to get out as fast as possible.
> >

> > I haven't been involved in professional software development, yet. But
> > I'm curious what the common consequences of failed projects are. Who
> > will be blamed? Those people who knew the project would fail, but
> > against all odds tried to pull it off? Does it have an adverse effect on
> > their professional reputation?
> >

> > Does it further software quality if good people leave hopeless projects?
> > That is, do those projects die quickly and completely or is the agony
> > even worse?
> >
>
> I've spent many years working on this very set of problems, and trying
> to advise others who are doing the same. Some thoughts:
>

...snipped a whole bunch of stuff I agree with and which is reinforced
by the book.

Alan


Tim Littlefair

Dec 10, 1997

Alan P. Burke wrote:
>
> I've recently read a book which I recommend highly; if "blame" is the
> subject, you should have a look at Edward Yourdon's 'Death March - The
> Complete Software Developer's Guide to Surviving "Mission Impossible"
> Projects'. (ISBN 0-13-748310-4, from Prentice-Hall). It reinforces my
> perpective that unrealistic expectations are the worst contributors to
> failure.
>
> ppgo...@hotmail.com wrote:
> >
> > > I haven't been involved in professional software development, yet. But
> > > I'm curious what the common consequences of failed projects are. Who
> > > will be blamed? Those people who knew the project would fail, but
> > > against all odds tried to pull it off? Does it have an adverse effect on
> > > their professional reputation?
> > >

Another book that may be of interest is Gerald Weinberg's
'Quality Software Management'. I'm reading this at the moment, and the
early chapters develop a link between the SEI's CMM levels and earlier
work from outside the software field by Philip Crosby. Weinberg doesn't
like the word 'maturity', but he develops his own six-level
classification, of which five levels map closely to the CMM and the
sixth is a case the CMM does not cover (software developed by end users
for themselves). The interesting thing is that one of the key features
differentiating the levels he identifies is how blame is generated and
distributed. In level 0 (end-user development), blame is not a factor
because the end users are always satisfied with their own work
(unfortunately, not all problems can be solved at this level). In level
1, the focus is on technical superstars, and if a project fails it will
be blamed on them. At level 2, project management is seen as the driving
force, and that is where the blame will lie. The discussion did not seem
(I am working from memory) to say as much about how blame goes around at
the higher levels, but these account for a pretty small proportion of
all development around the world (and hopefully less than their fair
share of fiascos).


--
Tim Littlefair MITS(WA) / Edith Cowan University
Please check out the following location home page for my MSc project on
Software Metrics for C++ development:
http://www.fste.ac.cowan.edu.au/~tlittlef


Martin Tom Brown

Dec 10, 1997

In article <1d0z0ck.1n5...@rhrz-isdn3-p51.rhrz.uni-bonn.de>
uzs...@ibm.rhrz.uni-bonn.de "Michael Schuerig" writes:

> Over on comp.software.testing there are two interesting threads
> originated by individuals who are currently working in projects that
> appear to be doomed from the start. In the response so far the advice
> has been unanimously to get out as fast as possible.
>

> I haven't been involved in professional software development, yet. But
> I'm curious what the common consequences of failed projects are.

A very good read for this is Project FOUL in Philip Metzger's book
"Managing a Programming Project"; as an illustration, it follows the
life history of a project headed off the rails.
It may be out of print - mine's an old copy.

> Who will be blamed?

It's hard to say - senior management can act at random when stressed.
I once saw a publicity manager (responsible for photos/ads/brochures)
who did a great product launch fired because the product didn't work.
His sin was that he didn't have a tidy desk.

> Those people who knew the project would fail, but
> against all odds tried to pull it off?

If you *know* the project will fail and have confidence in your
own professional judgement, then I personally would get out.
(And have done in the past.)

> Does it have an adverse effect on their professional reputation?

If you decide the thing can fly even when no-one else does, or only
a handful of techies do, *and* pull it through, there is a big bonus.
Expect lots of late nights and considerable pain. (I have done this too.)

Failure may be fairly disastrous in the "heads must roll" sense.
Usually, but not always, it's project managers that get fired.

> Does it further software quality if good people leave hopeless projects?

I would say yes, but it makes it tougher for those that stay.

> That is, do those projects die quickly and completely or is the agony
> even worse?

Out of control projects seldom die quickly or gracefully.

See Metzger's book for a brilliant example.

Regards,
--
Martin Brown <mar...@nezumi.demon.co.uk> __ CIS: 71651,470
Scientific Software Consultancy /^,,)__/


Insiguru

Dec 13, 1997

I have yet to learn the art of the short comment. However, the thread is
very interesting to me and I have so many comments, so I put together this
article directed at IT organizations. Commercial developers and contractors
may have a different perspective.

In my humble opinion, the causes of project failures in IT organizations
generally fall into four problem categories: development strategy, requirements
elicitation, estimating, and tracking and control. Sound almost like CMM level
2? You are right. If the CMM included requirements elicitation, requirements
management would be called requirements engineering.

A. Development Strategy and Requirement Elicitation Problems

As a first step in requirements engineering, context analysis to a significant
degree defines the development strategy. For example, suppose the analysis
showed that neither users nor developers have experience in the application;
the competition has this type of system, and they want one. Because the users
lack a clear understanding of their needs, using the conventional waterfall
development process may result in excessive changes and instability. The
requirements would be just too volatile, and the project will eventually fail.

Context analysis also identifies the most effective requirements elicitation
methods. Under the same circumstances, the typical JAD and rapid prototyping
approach will not substantially improve user knowledge of the requirements.
They need education about the domain. A domain analysis prior to JAD would
identify to the users and the developers, the commonality and variability
among in features and capabilities. This will prevent the "build the first one,
learn from the it, throw it away, and then build one that works" approach. That
fine and dandy, but guess who got the blame for the first one?

Solutions

Context Analysis -- Context analysis helps identify the development strategy,
including the selection of the software process life cycle, tools, and project
organization. It includes an organizational context, an environmental context,
and a project context. The organizational context identifies the users, their
business situation and desired results, culture, and political dynamics. The
environmental context identifies hardware and software constraints, system
boundaries, and interfacing systems. The project context identifies the
project's personnel, problem, product, and resource characteristics.

Domain Analysis -- Domain analysis as a requirements elicitation tool will
solve many problems. Providing users and developers with a clear understanding
of existing system features and functions, the problems solved, the solution
architectures, and boundaries gives projects a much better chance of success.
Domain analysis resolves problems associated with language miscommunication
between users and analysts, users who don't fully know their needs,
computer-illiterate users, analysts with limited application experience, and
other problems.

Tools -- A risk assessment tool that takes the results of the context analysis
and defines the most appropriate life cycle model, tools, and project
organization.


B. Project Estimating & Prediction

Project Estimation. Under certain circumstances, gaining the ability to make
accurate estimates is more critical to project success than hiring the best
developers, providing the best training, purchasing the best tools, and
improving our processes. With insufficient time and resources, a project team
consisting of the world's best developers using the most effective processes
supported by the best tools and methods would fail to meet commitments. On the
other hand, an inexperienced team that estimates accurately is considered
successful. Consider the following remarks, regarding predictability, about a
real project from the leading CMM expert at the SEI.

One of my favorite examples was a real-time control system in a CMM level 3
organization. It was 987 function points. The organization was a maintenance
organization that was being tasked with development work. They had no Windows
experience (and this was a Windows environment). They had no OO experience (and
this was a OO system). They never used a 4GL before (they were using Delphi on
this project). I normally would consider this project a high-risk one ... and
in fact the project manager was very disappointed because they had a 25%
overrun. His boss pointed out to him that that wasn't too bad given the
circumstances ;)

Although projects where the team lacks technical or application experience are
very common, they are always considered high risk. However, high-risk projects
are often the vehicle for the competitive breakthroughs that separate the
winners from the losers. Successful management of high-risk projects enables
companies to capitalize on new technologies and advances in IT, and increases
the ability of the business organization to use IT for competitive
breakthroughs.

Solutions
Organizational Learning. In the process of developing software products, we
create knowledge that is generally discarded. By retaining this knowledge,
however, we can through organizational learning systematically expand our
capacity to develop software. In other words, knowledge-manufacturing
organizations can use information from the current project to improve the
outcomes of future projects or prevent decisions that cause project failures.

Competitive pressures are forcing many organizations to eschew the ease of use
of generic mathematical models for the precision and accuracy gained from
developing internal models tailored to specific organizational, product, and
project characteristics. There are some companies that are generating estimates
that come within plus or minus 3% of actuals. Users of generic models cannot
compete with these organizations.

Estimating Tools. The most accurate estimate of future performance is previous
performance on similar projects. Knowledge-manufacturing organizations need
case-based tools to manage software projects. With case-based tools, a project
team estimates, plans, schedules, tracks, controls, and evaluates projects
based upon similar historical projects. In our example, an estimator develops a
project profile of the new project and its subsystems and searches through the
database for Windows, 4GL, Delphi, OO, real-time projects with teams of very
low technical experience. The more specific, the better. These projects become
the reference projects for the new project's estimates.

Sure, the estimates are based on low-productivity projects; that's expected.
However, management now has options. They can improve the schedule and
productivity by replacing the staff with people experienced in the technology,
by adding senior people even if inexperienced in the technology, or by revising
the technology requirements. Impossible projects occur because, in the absence
of information to the contrary, wishful thinking prevails. With case-based
estimating, software developers have a defense against the unreasonable project
demands that often result from a lack of knowledge.
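The case-based retrieval described above can be sketched roughly as follows. This is a minimal illustration, not any actual tool: the project attributes, the similarity measure (count of matching attributes), and the history data are all hypothetical.

```python
# Sketch of case-based effort estimation: find the historical projects
# most similar to a new project's profile and average their actual effort.
# All attribute names and numbers are invented for illustration.

def similarity(profile, case):
    """Count how many profile attributes a historical case shares."""
    return sum(1 for k, v in profile.items() if case.get(k) == v)

def estimate_effort(profile, history, k=2):
    """Average the actual effort of the k most similar past projects."""
    ranked = sorted(history, key=lambda c: similarity(profile, c), reverse=True)
    reference = ranked[:k]
    return sum(c["effort_months"] for c in reference) / len(reference)

# Hypothetical experience database of completed projects:
history = [
    {"platform": "Windows", "language": "Delphi", "oo": True,
     "realtime": True, "team_experience": "low", "effort_months": 40},
    {"platform": "Windows", "language": "Delphi", "oo": True,
     "realtime": False, "team_experience": "low", "effort_months": 36},
    {"platform": "Unix", "language": "C", "oo": False,
     "realtime": True, "team_experience": "high", "effort_months": 18},
]

# Profile of the new project (matches the Windows/Delphi/OO example above):
new_project = {"platform": "Windows", "language": "Delphi", "oo": True,
               "realtime": True, "team_experience": "low"}

print(estimate_effort(new_project, history))  # averages the two Delphi cases
```

The more attributes the profile records, the sharper the retrieval; a real tool would also weight attributes and report the spread across the reference projects, not just the mean.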

Feedback Tools. IT organizations need feedback on the relationships between
project performance and project characteristics. We need to measure
performance objectively. Based upon previous projects, what range of
productivity, quality, functionality, and schedule and budget performance can I
expect from this project? In our example, the project manager's boss thought
that he did a good job. Based upon previous similar projects, does a 25% cost
overrun while meeting quality and functionality objectives represent superior
project performance?

Another area of analysis is extenuating circumstances and management decisions.
Certainly our example project represents an extenuating circumstance. We need
to analyze the impact on project performance of red flags like very low team
experience, preset budgets and deadlines, and uninvolved users. To prevent
impossible projects, we must train the managers and users on the probable
impact of their decisions. Showing an uninvolved user that 100% of past
projects with uninvolved users had very low quality, high costs, and low
productivity and functionality may garner some attention. Without information
we are all ignorant: managers, users, and software people.

C. Project Tracking Problems

Ineffective Project Control -- Now that we have an effective development
strategy, requirements that meet customer needs, and deadlines that are
achievable, we need to keep the project on track. Project tracking and control
consists of three major elements: risk management, task management, and metrics
management. Risk management is always an essential element because we don't
want the project torpedoed. Task management micro-manages individual tasks.
Metrics management allows projects to continue on an orderly course so that
team members can complete individual tasks on time.

Solution

Organizational Learning. The first solution is to align the assumptions and
predictions of the tracking mechanism with our particular project. A case-based
reasoning approach involves the retrieval of relevant experience based upon
case histories of similar past projects. The basic premise is to use the same
reference projects that we used in estimating to develop baselines or "models
of success". As long as software metrics fall within these baselines, the
project is behaving like its sibling history projects.
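A "model of success" baseline of this kind can be sketched as a simple range check against the reference projects. The metric name and all the numbers below are invented for illustration; a real tool would use statistical control limits rather than a raw min/max band.

```python
# Sketch: flag a metric that falls outside the band observed in the
# reference projects. Metric names and values are hypothetical.

def baseline(reference_values):
    """Min/max band of a metric across the sibling history projects."""
    return (min(reference_values), max(reference_values))

def check_metric(name, value, reference_values):
    """Report whether a current metric value behaves like the references."""
    lo, hi = baseline(reference_values)
    status = "within baseline" if lo <= value <= hi else "OUT OF BALANCE"
    return f"{name}: {value} ({status}, expected {lo}-{hi})"

# Hypothetical defect-discovery rates (defects/KLOC) from reference projects:
reference = [4.0, 5.5, 6.2, 4.8]

print(check_metric("defects/KLOC", 5.0, reference))  # inside the band
print(check_metric("defects/KLOC", 9.1, reference))  # flags out of balance
```

An out-of-balance flag is only the trigger; deciding *why* the project has drifted from its siblings is where the expert-system rules described below would come in.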

Tools. Knowledge-manufacturing organizations need case-based tools that resolve
potential problems before commitments are put at risk. We need metrics to
identify out-of-balance conditions. We need metrics to objectively answer
questions about an out-of-balance condition. We need expert system technology,
based upon project management rules and knowledge bases, to rank the likely
reasons for the out-of-balance condition. The rules and knowledge bases come
from extensive research efforts to identify the rules expert managers use in
the practical management of software development. We need lessons learned from
previous projects to make recommendations that will resolve the problem.

I guess what I am trying to say here is that metrics and the right software are
the best things that ever happened to software folk.


Brent Rollings

Dec 16, 1997

Michael Schuerig wrote in message
<1d0z0ck.1n5...@rhrz-isdn3-p51.rhrz.uni-bonn.de>...
>I haven't been involved in professional software development, yet. But
>I'm curious what the common consequences of failed projects are. Who
>will be blamed? Those people who knew the project would fail, but
>against all odds tried to pull it off? Does it have an adverse effect on
>their professional reputation?

I was involved in the archetypal doomed project last year at another company.
I was the second Team Lead under the third Department Manager. (The bodies
were piling up quickly.) The consequences that I suffered during the project
were long hours away from my family. The project would ultimately be
destroyed by mismanagement.

In one meeting, I was threatened by the Product Manager. He said that I
should go along with his proposals in a presentation to the CEO or I would
lose my job. My response was "I'm marketable; what about you?". When I got
fed up with that, I found another job at UNIPAC, a company committed to
quality development practices. This included a 25% increase in my salary and
better working conditions.

Ultimately, the old Project Manager is still working in a job he is ill
prepared to do. He seems to have suffered no negative consequences. A joke
among the development staff was that this individual was made of teflon.
Absolutely nothing ever stuck to this person. The company has now replaced
more than 50% of its programming staff with high-priced consultants and is no
closer to completing the project than they were 12 months ago.

--
Brent Rollings - Senior Analyst
Unipac Service Corporation
===================================
My opinions are purely my own irresponsibility.

ppgo...@hotmail.com

Dec 17, 1997

In article <19971213073...@ladder01.news.aol.com>,

<huge snips from excellent material>

> Solutions
> Organizational Learning. In the process of developing software products, we
> create knowledge that is generally been discarded. By maintaining this
> knowledge, however, we can through organizational learning systematically
> expand our capacity to develop software. In other words, knowledge
> manufacturing organizations can use information from the current project to
> improve the outcomes of future projects or prevent decisions that cause project
> failures.
>

Excellent, excellent point. Peter Senge's "The Fifth Discipline"
reveals a lot about the concept of the Learning Organization.
If I had to point to one area in which SE is weak, remains
weak, figures out ways to stay weak (I am getting my shorts
into a bunch now, as I write) it is this one. My line of work
has taken me into many shops and many projects, often
as an observer. I have asked this question of dozens and
dozens of practitioners and project managers from coast to
coast: What process do you follow, after any project or
project phase, to gather and review what has been learned,
in any formal way, so that this knowledge can be applied
to future projects or phases?

Without exception, over a period of 10 years and God knows
how many miles of traveling to how many sites, I have never
had anyone answer this question with a description of such
a process. I get blank stares. I get silence. I get
great sighs and shaking of heads. I get "I wish we could
do this, but we just do not have the time or resources."

What are we thinking, I often wonder, to call ourselves
an engineering practice, when we do not formally learn from
our endeavors?

Lobsang Gyalpo

Dec 18, 1997

In article <01bd0a68$813f5590$6a060a0a@uhq-ws-1761>, "Brent Rollings" <brol...@msm.unipac.com> writes...


>Michael Schuerig wrote in message
><1d0z0ck.1n5...@rhrz-isdn3-p51.rhrz.uni-bonn.de>...
>>I haven't been involved in professional software development, yet. But
>>I'm curious what the common consequences of failed projects are. Who
>>will be blamed? Those people who knew the project would fail, but
>>against all odds tried to pull it off? Does it have an adverse effect on
>>their professional reputation?
>

The phases of a software project are as follows:

Enthusiasm
Disillusion
Panic
Search for the guilty
Punishment of the innocent
Praise and honours for the non-participants

Lobsang


Insiguru

Dec 18, 1997

This is a response to an e-mail asking whether there is any relationship
between the CMM and organizational learning, and whether I know of any
CMM/organizational-learning-based organizations.

"The first step in improving the programming development process was
learning how to make and meet budgets and schedules. Having learned the
ability to predict resources and produce consistent results, programming is
now in a position to make significant further advances.

Without change control, test-case tracking, statistical databases, and
structured requirements, the programmers often repeat past mistakes or
re-solve the same problems. Process discipline reduces such waste, permits
more orderly learning, and allows the professionals to build upon the
experiences of others.

There is no magic route to process discipline. It requires our dedication to
continuous growth and improvement. Gains, once made, must be retained, and
the ingenuity of our people must not be lost solving problems that have
already been solved." Excerpts from Watts Humphrey, IBM Systems Journal,
Vol. 24, No. 2, 1985, pp. 77-78.


Need I say more? Organizational learning is deeply embodied in the vision that
eventually resulted in the Capability Maturity Model. Indeed, it is an
essential element. The Integrated Software Management KPA represents the
organizational learning capability of the CMM. However, the tool itself could
be implemented at level 2.

From what I read, I would certainly consider that Boeing's STS very effectively
combines process discipline and organizational learning. I also think that
their success is directly attributable to such a combination.


Cheers

R. Mathis


PTedesco

Dec 20, 1997

>What process do you follow, after any project or
>project phase, to gather and
>review what has been learned,
>in any formal way, so that this knowledge can
>be applied
>to future projects or phases?

Also in Peter Senge's "Fifth Discipline" there is a
full description of using this knowledge directly
to study new project requirements. This process
is probably better described in the Fieldbook....
------------------------------------------------------------------------
http://members.aol.com/PTedesco/cognitor.html
Paul Tedesco's home page covering software engineering, business process
reengineering, reuse, and artificial intelligence.

Insiguru

Dec 24, 1997

>What are we thinking, I often wonder, to call ourselves
>an engineering practice, when we do not formally learn from
>our endeavors?

I think the problem lies in the belief systems of the software industry.
Software practitioners embody a set of beliefs, rational or non-rational,
conscious or unconscious, that become part of the overall development culture.

1) A popular belief of software developers and project managers is that history
data will be used against them. As a result, they resist the benefits of
organizational learning. Their nemesis, however, is not too much information;
it is the lack of information that causes the dysfunctional development
cultures and the bitterness expressed in other threads. Without proof based
upon organizational experience, software developers and project managers
succumb to unrealistic deadlines, attempt impossible projects, and receive the
blame when they fail. The only opportunity for success is data, metrics, and
organizational learning. You cannot get into trouble when you are successful.
By rejecting the collection of internal history and metrics, they are shooting
themselves in the foot.


2) Other popular myths are that software projects cannot be successful, that
large software projects cannot be successful, or that projects can meet only
two of the four commitments: quality, functionality, schedule, and budget.
Organizations like Raytheon, the Software Engineering Laboratory (SEL), and
Boeing STS have proven that large projects can meet all four commitments. All
three organizations use systematic learning and knowledge reuse to improve
technology management, receive awards, and transfer new competencies to the
marketplace.


Although the process improvement strategies of the three organizations vary,
the common factor is knowledge creation. Raytheon's Process Data Center is a
major reason it was the second winner of the annual IEEE Computer Society
Software Process Achievement Award, while the SEL with its Experience Factory
was the award's first winner. I am sure that an award for Boeing STS is in the
works somewhere.

3) The myth of the accuracy of the cost model. So many people believe in cost
models that I feel like the Xmas Grinch. Uncalibrated cost models are
disasters. An SEI report, Survey of Commonly Applied Methods for Software
Process Improvements, CMU/SEI-93-TR-27, disclosed that a 1987 study found the
error rates of the four most popular models without calibration to be often
in the 500 to 600 percent range. It disclosed that a 1990 study advocated never
using cost models by themselves, because they are not accurate enough. It also
disclosed that a 1991 study claimed that no studies confirm the accuracy and
usability of current models. The SEI study urged great caution, for without an
experience database for calibration, software cost models have little or no
value.

Calibrated cost models provide better performance. Moreover, collecting data
for software model calibration opens the door to creating knowledge. Do
estimators need organizational learning? I think so. A Jet Propulsion
Laboratory study showed that experienced estimators who forecast frequently
(at least every six months) on average estimated 12% above actuals. Estimators
who estimated less frequently (at greater than six-month intervals) on average
estimated 44% below actuals. Such a disparity highlights the need for a
corporate memory in the form of an experience database.
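The calibration idea behind these numbers can be sketched with a single correction factor computed from an experience database. This is a deliberately simple illustration under invented figures; real calibration (e.g. of COCOMO-style models) fits multiple model parameters, not one ratio.

```python
# Sketch: calibrate a raw model estimate using the mean actual/estimate
# ratio observed on completed projects. All numbers are hypothetical.

def calibration_factor(history):
    """Mean ratio of actual effort to model estimate over past projects."""
    ratios = [actual / estimated for estimated, actual in history]
    return sum(ratios) / len(ratios)

def calibrated_estimate(raw_estimate, history):
    """Scale a raw model estimate by the organization's observed bias."""
    return raw_estimate * calibration_factor(history)

# (model estimate, actual effort) pairs from the experience database:
history = [(100, 140), (80, 100), (120, 180)]

# Factor above 1.0 means the uncalibrated model habitually under-estimates
# in this organization, so raw estimates get scaled up.
print(round(calibrated_estimate(50, history), 1))
```

Even this crude factor captures the SEI report's point: the correction is organization-specific, so it cannot exist without an internal history database.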

