
Defect Costs


SCT Technology

4 Mar 2008, 12:25:17
Greetings,
I have been tasked with performing defect cost analysis here at work.
Here's where I'm currently at with my initial data gathering &
brainstorming (it is my hope that others will benefit here):

- The defect value will be generic/average defect costs
- We want as much data as possible for a "defensible" cost analysis
- Cost assessments will be across various phases in the SDLC
(Requirements, Unit Testing, Integration & System Testing, Production)
- The amount of work needs to be determined, which "may" include the
following time variables: testing, daily defect calls (includes
triage/management resolution), development, retest & closure
- Defect history can be used to detect time periods (when opened,
assignments, status change, etc)
- We will probably go back and analyze the past 2-3 releases
- Phase durations will be determined based upon release dates
- Average cost to fix a defect will be based upon the following
formula:
  avg cost per fixed defect =
      (# of people * # of days * cost per person-day) / (# of fixed defects)
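
For concreteness, here's a minimal sketch of that calculation in Python
(all numbers are hypothetical placeholders, not real project data):

    # Sketch: average cost to fix a defect for one phase.
    people = 4                    # people working defects in the phase
    days = 20                     # phase duration in days
    cost_per_person_day = 600.0   # fully loaded cost per person-day
    fixed_defects = 32            # defects fixed during the phase

    avg_cost = (people * days * cost_per_person_day) / fixed_defects
    print(avg_cost)               # 1500.0 per fixed defect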

There has been some discussion on using costing units, which use
industry-standard multipliers (the multiplier is applied to the actual
numbers we get in our cost assessment). I'm not familiar with costing
units; maybe someone can elaborate on this?

Q1: Is it critical to have specific applications in mind with this
analysis or should I just pick random defects and track the values?

Q2: What significance can be found with defects when analyzing product
versus project? (if this is even necessary given Q1 above)

Q3: Are there other industry resources available for defect cost
analysis? (e.g., good web sites or white papers)

This is good for me just to do a brain dump on this issue. If I'm
missing something above or if anyone has any wisdom on approaching
this issue, please comment.

H. S. Lahman

4 Mar 2008, 15:45:05
Responding to Technology...

> Greetings,
> I have been tasked with performing defect cost analysis here at work.
> Here's where I'm currently at with my initial data gathering &
> brainstorming (it is my hope that others will benefit here):

You might want to do a literature search. Entire books have been written
about this sort of thing. "Return on Software" by Steve Tockey is an
example, though perhaps a bit too general for your purposes. There are
also models of defect cost through the software life cycle in the SQA
literature (e.g., "Successful Software Process Improvement" by Robert
Grady and "Software Metrics" by Fenton and Pfleeger both talk about
defect cost and models, though the metrics book is also rather general).

> - The defect value will be generic/average defect costs
> - We want as much data as possible for a "defensible" cost analysis
> - Cost assessments will be across various phases in the SDLC
> (Requirements, Unit Testing, Integration & System Testing, Production)

Most empirical models suggest that the cost of escapes to the field
dwarfs all of the defect costs encountered prior to release combined.
Unfortunately things like lost market share due to a bad reputation for
reliability are difficult to quantify.

> - The amount of work needs to be determined, which "may" include the
> following time variables: testing, daily defect calls (includes
> triage/management resolution), development, retest & closure

It is usually good to distinguish between direct costs accrued against
specific defects and the overall costs associated with defect prevention
and detection. For example, you are going to have to design, implement,
and execute test suites regardless of how many defects you actually find.

That cost can be allocated to the actual defects as an average per-unit
cost. But that can be misleading. That average cost will rise when
reliability improves over time. Does that mean the testing is not cost
effective? Hardly.
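
As a hedged illustration (the release counts are invented), allocating a
fixed test-suite cost per defect found makes the per-defect "cost" climb
as reliability improves, even though the testing effort is unchanged:

    # Sketch: fixed test-suite cost allocated per defect found.
    test_suite_cost = 50000.0   # cost to build/run the suite each release
    defects_found = {"release_1": 100, "release_2": 50, "release_3": 25}

    for release, found in defects_found.items():
        # Fewer defects found -> higher apparent cost per defect.
        print(release, test_suite_cost / found)
    # release_1 500.0, release_2 1000.0, release_3 2000.0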

> - Defect history can be used to detect time periods (when opened,
> assignments, status change, etc)
> - We will probably go back and analyze the past 2-3 releases
> - Phase durations will be determined based upon release dates
> - Average cost to fix a defect will be based upon the following
> formula:
> avg cost per fixed defect =
>     (# of people * # of days * cost per person-day) / (# of fixed defects)

You probably want finer granularity than this. Most process-oriented
shops track separately at least diagnosis time, repair time, and
verification time. That's because the things you do to reduce diagnosis
time (built-in trace, assertions, etc.) are usually different from those
to reduce repair time (maintainability, dependency management) or
verification time (test automation). To properly evaluate process
improvements you will need to distinguish between those things.
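
A minimal sketch of what such finer-grained tracking might look like
(the class and field names are my invention, not any standard):

    # Sketch: per-defect effort broken out by activity, in hours.
    from dataclasses import dataclass

    @dataclass
    class DefectEffort:
        defect_id: str
        diagnosis_hours: float      # finding the cause (trace/assertions help)
        repair_hours: float         # changing the code (maintainability helps)
        verification_hours: float   # retest & closure (automation helps)

        def total_hours(self) -> float:
            return self.diagnosis_hours + self.repair_hours + self.verification_hours

    d = DefectEffort("DEF-123", 6.0, 2.0, 1.5)
    print(d.total_hours())   # 9.5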

You did not mention defect classes. These are very critical to
incremental process improvement. In addition, having different defect
classes facilitates models that predict field reliability. [Such models
converge very quickly if there are 10-12 defect classes and 4-5 stages
where defects are found, even when fault coverages have large error
ranges. The value of such convergence is that you can determine exactly
how many defects were actually inserted even when testing is not very
effective. That can be used to trigger both process improvement and more
testing.]

> There has been some discussion on using costing units, which use
> industry-standard multipliers (the multiplier is applied to the actual
> numbers we get in our cost assessment). I'm not familiar with costing
> units; maybe someone can elaborate on this?

Bear in mind that industry averages are a temporary solution until you
get your own track record. IOW, there is no substitute for using real
data from your own environment.

There is nothing magic about costing units. A costing unit is a regular
unit cost that has been normalized in some way to make comparisons
across different things *more* valid.

For example, you could define the unit cost of defect repair in terms of
developer effort hours per defect. That might be fine for some purposes,
but it may depend upon the developer skill level. So the cost per defect
might depend on who did the repair. A similar problem exists if the cost
is expressed directly in money; one probably got the money by
multiplying effort hours by some average salary value.

One possible solution is to come up with a different unit of cost, such
as salary-effort. One computes the salary-effort cost = developer effort
hours * developer salary. If one can assume that the highly skilled
developers will be paid more than lower-skilled developers, this
produces a more comparable value because the higher-skilled developer's
higher salary will be multiplied by proportionately fewer effort hours.
So if the salary is actually perfectly matched to productivity, one
should get the same salary-effort cost from higher- and lower-skilled
developers for a given repair.
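
A small sketch of that salary-effort idea (the salaries and hours below
are made up):

    # Sketch: salary-effort cost = effort hours * hourly salary.
    # If salary is perfectly matched to productivity, both repairs
    # cost the same in salary-effort terms despite different hours.
    senior = {"hours": 4.0, "hourly_salary": 90.0}
    junior = {"hours": 9.0, "hourly_salary": 40.0}

    for who, rec in (("senior", senior), ("junior", junior)):
        print(who, rec["hours"] * rec["hourly_salary"])
    # senior 360.0, junior 360.0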

> Q1: Is it critical to have specific applications in mind with this
> analysis or should I just pick random defects and track the values?

If I understand your question, I think the answer is: both. Usually one
tracks defects for specific applications and cumulatively across
applications. One reason to seek out a normalized cost unit is so that
application-to-application comparisons are valid. Another reason is so
that cumulative metrics aren't mixing apples & oranges.

>
> Q2: What significance can be found with defects when analyzing product
> versus project? (if this is even necessary given Q1 above)

Do your projects span products? If not, multiple projects probably map
to each product. In that case if the products are comparable, then the
projects probably are as well.

>
> Q3: Are there other industry resources available for defect cost
> analysis? (e.g., good web sites or white papers)

Google is your friend. B-)

--
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
h...@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
in...@pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH

Wayne Woodruff

5 Mar 2008, 06:32:57

>Greetings,
>I have been tasked with performing defect cost analysis here at work.
>Here's where I'm currently at with my initial data gathering &
>brainstorming (it is my hope that others will benefit here):

I'm curious to know what you plan to do with this metric. A wise man
once asked me, "What action will be taken as the result of a
measurement?" If you have no plans to use this information to make a
change, you will spend a lot of time and $$ on computations that yield
no benefit.

You need to clearly define what you mean by "defect cost analysis".
You might want to google 'Operational Definition'. The purpose is to
clearly define what data will be collected, how the data will be
collected, how the data will be manipulated, and how it will be
presented, among other things. Stakeholders must agree on the OD - it
should be inspected/reviewed by them.

>- The defect value will be generic/average defect costs

>- We want as much data as possible for a "defensible" cost analysis

The OD will help here.

>- Cost assessments will be across various phases in the SDLC
>(Requirements, Unit Testing, Integration & System Testing, Production)

>- The amount of work needs to be determined, which "may" include the
>following time variables: testing, daily defect calls (includes
>triage/management resolution), development, retest & closure
>- Defect history can be used to detect time periods (when opened,
>assignments, status change, etc)
>- We will probably go back and analyze the past 2-3 releases

Sounds like you are going to dive deep. I think you are going to
become frustrated trying to objectively collect this level of detail.

I'd suggest not going so deep on the last 2-3 releases. Take
measurements from those releases to understand what you don't know,
then make changes to collect better data going forward.

I'd suggest 'simply' measuring phase containment: in what phase was the
defect detected, and in what phase was it introduced?
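
A sketch of how phase containment could be tallied (the phases and
counts are illustrative only):

    # Sketch: phase-containment tally.
    # A defect is "contained" when it is detected in the same phase
    # it was introduced.
    from collections import Counter

    defects = [
        ("requirements", "requirements"),   # (introduced, detected)
        ("requirements", "system_test"),
        ("coding", "unit_test"),
        ("coding", "production"),
    ]

    matrix = Counter(defects)
    contained = sum(n for (intro, det), n in matrix.items() if intro == det)
    print("containment rate:", contained / len(defects))   # 0.25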

>- Phase durations will be determined based upon release dates


>- Average cost to fix a defect will be based upon the following
>formula:
> avg cost per fixed defect =
>     (# of people * # of days * cost per person-day) / (# of fixed defects)

>
>There has been some discussion on using costing units which uses
>industry standard multipliers (the multiplier is applied to the actual
>numbers we get in our cost assessment). I'm not familiar with costing
>units & maybe someone can elaborate on this?

I don't know about 'industry standard' multipliers, but the figure most
people fling out is a cost factor of 10x per phase escaped.
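
As a purely hypothetical illustration of that 10x rule of thumb:

    # Sketch: a defect costing 1 unit to fix in the phase where it
    # was introduced, with cost multiplied by 10 per phase escaped.
    phases = ["requirements", "design", "coding", "test", "production"]
    cost = 1.0
    for phase in phases:
        print(phase, cost)
        cost *= 10.0
    # requirements 1.0 ... production 10000.0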

>
>Q1: Is it critical to have specific applications in mind with this
>analysis or should I just pick random defects and track the values?
>

What is the population of defects? 100? 500? 10,000?

>Q2: What significance can be found with defects when analyzing product
>versus project? (if this is even necessary given Q1 above)
>
>Q3: Are there other industry resources available for defect cost
>analysis? (e.g., good web sites or white papers)
>
>This is good for me just to do a brain dump on this issue. If I'm
>missing something above or if anyone has any wisdom on approaching
>this issue, please comment.

I wish I had more time, but I gotta run.

Wayne Woodruff
http://www.2zars.com

Vladimir Trushkin

5 Mar 2008, 08:38:53

Unless your organization is significantly different from what other
organizations look like, I would rather stick to the industry
standards. Spending a lot of effort just to convince someone that
defects are less expensive when removed early from the product appears
to be overkill to me.

----
Best Wishes,
Vladimir

developer

6 Mar 2008, 01:11:27

Not only is it critical to have specific apps in mind, but it is
critical to have specific types of defects in mind.

IME, the WORST type of defect to have is something which sends bad data
to another system which cannot recover from the bad data. (I'm sure
there are other types which are just as costly, but this is IME - a
20-hour day from hell, working with people on another system that just
can't handle correct data once bad data has been sent.)

Other types of defects would include generating bad data which is not
noticed for several days and then being faced with correcting the
initial problem and then correcting all the bad data, which by that
point has propagated to other systems and even customer invoices.

And then there's the type of defect which brings an entire plant to its
knees - ever had a plant manager yelling at you "We've got 600 union
employees that we're paying an average of $30 an hour to sit around and
do nothing because your system is down!!!".

Yep - that's a critical defect all right.

Michael Bolton

6 Mar 2008, 11:33:12
> Unless your organization is significantly different from what other
> organizations look like, I would rather stick to the industry
> standards.

When there is no standard industry, the idea of industry standards
seems rather feeble. Even if there were industry standards based on
anything other than folklore, how would you know if the information
applied to your organization, your market, your product, your people,
your bugs, your features, etc., etc., etc.?

To the original poster: you're involved in a measurement exercise,
and all of the cautionary messages you've received so far are worthy of
consideration. Before you go further, I'd recommend that you read
Kaner and Bond: http://www.kaner.com/pdfs/metrics2004.pdf; and read
Weinberg, Quality Software Management, Volume 2: First-Order
Measurement. What you want to beware of, in particular, is turning
rich information (stories about bugs, problems, risks, value) into
impoverished data (numbers).

A reminder: cost and value are not the same thing, they're not
inversely proportional, and they're both highly multivariate.

---Michael B.
