First of all, this is not a homework assignment, but this is something
that I have been investigating lately.
I do not work in industry, but would like to know what aspects of the
inspection process you do not like (or find useful). I do know
inspections can be quite effective in detecting defects, but many
organizations do not use them for various reasons. I am reading articles on
inspection, but I do not feel that they provide the insight that those of
you who work in software development on a daily basis can provide.
Thanks in advance,
Well, first of all, many organizations do not use anything. They just
throw six programmers in a room, and expect them to be keying in code
all day long. Sometimes these same organizations will hold long
department meetings where they *talk* about doing something --
anything -- to establish some kind of defined process, but they never do.
Is there something about inspections that makes them less used than
practically any other aspect of development that doesn't involve
keying and compiling? I doubt that many fewer people do inspections
(do you mean walkthroughs with several people present, or individual
developers sitting and reading their own code? not sure it matters,
just wondering) than do, say, written requirements.
Another question is whether inspections of code, which is what I
assume you mean in one flavor or another, are, as you have stated,
actually effective at detecting defects. It is a "white box",
bottom-up tactic, where generally I favor top-down, "black box"
approaches. Humans do not read code well, even well-structured and
documented code. If there is a *theoretical* reason not to use
inspections, I'd say that would be it.
Generally people do not like their work being criticised.
Thus inspections can often be painful.
The problem is made worse by there being no universally accepted
criteria of what makes a good program. Therefore there will always be
scope for arguments.
If the inspections limited themselves to looking for actual bugs
rather than finding problems with the style of the work, that
would be more pleasant but perhaps less effective.
I once attended an inspection of a document where again there
was plenty of argument, and the author (victim!) became visibly upset.
Part of the problem may be that the purpose of the
document was not adequately defined before-hand.
To improve the efficiency of code reviews I suggest that
firms first create coding standards that all the programmers
agree on. There also needs to be agreement on the relative
importance that should be given to efficiency as against
readability in the code.
d...@ftel.co.uk My views only.
The theory. Inspections performed at a measured pace that carefully
work over the product (source or document) can remove a lot of
problems before they become major problems.
The practice. Thou shalt do inspections, go into that room and
inspect! Results degrade into a major battle of stylistic issues,
real problems are not removed. Management declares that that was a
waste of time and it is abandoned.
The practice 2. Some bright spark reads about it, since they do not
want to waste time they inspect after testing has taken place. Since
the product is tested there is a big barrier to any change that is not
a serious problem.
To make inspection work you need:
1. A checklist.
2. Preparation -- document trivial issues before you walk in.
3. A slow, metered pace that targets defects.
4. A cycle to ensure defects are corrected.
I use inspection as a tool extensively. It must be done very early, I
do it immediately after a clean compile. The formal stuff does not
need to be onerous: simply hand-write notes in the meeting and mark up
trivial issues on the listing itself. Use the notes on the next cycle
to ensure that everything is complete.
I have a document on inspection (Word 6) on my web site if you are
interested, with coding checklists.
Since we started doing this I have asked myself how I feel about
the act of inspecting code? How I feel about this being part of
what my job is?
At one level I recognize the potential value of doing inspections,
if done correctly, and I want to support doing them, if done correctly.
But at another level I don't enjoy doing them at all.
What I'm not sure of is what I don't like about them. My best guess
is that it is because much of what I do is automated. I have tools
for building code and debugging it. When you ask me to inspect it
and give me no tools I feel like I've been asked to read by candlelight.
I get a strong urge to write a perl/awk/sed/anything! script to run
the code through.
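For what it's worth, the kind of quick script described above might look something like this minimal sketch in Python (the specific checks -- line length, marker comments, `== None` -- are arbitrary illustrative examples, not a recommended checklist):

```python
import re

def pre_inspect(source: str, max_line_len: int = 80):
    """Flag a few mechanical issues so a human inspection can
    focus on real defects. Returns (line_number, message) pairs."""
    findings = []
    for num, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_line_len:
            findings.append((num, f"line exceeds {max_line_len} characters"))
        if re.search(r"\b(TODO|FIXME|XXX)\b", line):
            findings.append((num, "unresolved marker comment"))
        if re.search(r"==\s*None\b", line):
            findings.append((num, "comparison to None with == (use 'is')"))
    return findings

sample = "x = compute()\nif x == None:  # TODO handle error\n    pass\n"
for num, msg in pre_inspect(sample):
    print(num, msg)
```

The point is only that mechanical checks like these can be scripted once and rerun for free, which is exactly what a room full of human inspectors cannot do.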
Of course there are other reasons for concern like the potential for
resentment and other interpersonal conflicts that might arise. But
I'm less concerned about those because I expect professional engineers
to act like grown ups (old fashioned attitude ;-).
On the other hand if we talk about inspecting documents (which we
haven't done in a Fagan context) it seems natural to me. When I get
a requirements document what I do with it is inspect it. That is the
norm. I have to inspect it in detail and determine how I can apply
the tools I have to satisfying those requirements. A design document
would be the same. They are prose and as an engineer my job is to
systematically inspect them.
> First of all this is not a homework assignment, but this is something
> that I have been investigating lately..
> I do not work in industry, but would like to know what aspects of the
> inspection process you do not like (or find useful). I do know
> inspections can be quite effective in detecting defects, but many
> organization do not use them for various reasons. I am reading articles on
> inspection, but I do not feel that they provide me the insight that I
> want that those of you who work in software development on a daily basis
> can provide.
> Thanks in advance,
Senior Software Engineer
Genetics Computer Group, a subsidiary of Oxford Molecular
I'll take a stab:
Formal inspections are very resource-intensive. Informal inspections are
much less so.
Both provide a very powerful incentive to get the code right the first
time. IMHO, about 90% of the goodness of inspections comes from knowing
beforehand that someone else will be giving your code a thorough review;
the other 10% goodness comes from finding actual bugs.
So even if an informal review finds many fewer actual bugs, but costs
half as much and provides roughly the same up-front incentive, the end
result is bigger bang/buck.
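That bang/buck claim can be made concrete with toy numbers (every figure below is hypothetical, chosen only to match the 90/10 split and the "half the cost" assumption above):

```python
# Toy cost/benefit comparison for formal vs. informal inspections.
# Value = up-front incentive effect (~90% of the goodness, present in
# both styles) + bugs actually found (~10% of the goodness).

def review_value(bugs_found, value_per_bug=2.0, incentive_value=90.0):
    return incentive_value + bugs_found * value_per_bug

formal_cost, informal_cost = 100.0, 50.0   # informal costs half as much
formal_bugs, informal_bugs = 10, 4         # informal finds fewer bugs

formal_value = review_value(formal_bugs)     # 90 + 20 = 110
informal_value = review_value(informal_bugs) # 90 + 8  = 98

print(formal_value / formal_cost)      # value per unit cost, formal
print(informal_value / informal_cost)  # value per unit cost, informal
```

With these (made-up) numbers the informal review finds fewer than half the bugs, yet still delivers nearly twice the value per unit of cost, because the incentive effect dominates.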
Another point -- way back when I was learning how to do inspections, the
idea was continuously stressed that formal inspection is a damned
expensive proposition. The ONLY way to make it pay off is to apply the
Deming principles of instrumenting the process (inspection and
inspections results), then feeding the results back to enhance the
process, with the goal of eliminating problem creation upstream rather
than catching the problems downstream. This eventually allows you to cut
way back on the expensive part, the inspections.
Unfortunately, it's an extremely hard sell to get resources devoted to
analyzing code inspection results and developing process changes.
Without that feedback loop, the cost of inspection has to be justified
only against the cost of not finding the problems that are found by the
inspection. Thus inspection becomes much easier to justify early in
the development process, for requirements, spec and design documents,
because problems not found at those stages quickly snowball in cost.
Finally, a very persuasive argument can be made that code inspection is
a difficult, one shot deal for humans (reading by candle light, as
someone said). Compilers and comprehensive testing are much better at it
than we are, and they can be repeated with a key stroke.
I'm a big believer in:
1) formal inspection of project documents
2) informal review and intensive (repeatable) testing of code
Bill Kilgore (DEC-certified inspection moderator)
Share your life -- be an organ donor
I don't think anybody, deep down, likes seeing all the blemishes in
their work. This is an area that's critical to control - to make sure
that comments are mature, that the environment is generally supportive,
and that the author knows that it's the work being criticized, not the person.
However, this is the same reason that inspections (formal or not) work
well - you don't like to see blemishes in your own work, so you tend to
subconsciously mitigate or ignore them.
I think some people hate to take time away from "regular" work to do
inspections. It's a bit of laborious work, and if the inspection isn't
for something that your team is working on, it kind of feels like time
is being stolen from you. This is especially true if taking time out for
inspections isn't taken into account by those in charge of schedules.
Speaking of which, it also seems that the schedulers, if they don't
take inspections into account from the very start, tend to dislike
inspections as they see it as adding development time to an already
too-tight schedule. Skipping, reducing, or downgrading formal
inspections is an easy way to shave time. This is the same thinking, however, that would cut testing
time in order to meet a schedule. While sometimes such things do have to
be done, if it's being done based on the idea that "hopefully the
requirement/design/code doesn't have many problems with it", they are
undoubtedly wrong. Published statistics show that this is the worst type
of false savings - being penny wise and pound foolish.
As for formal vs. informal inspections, I think that depends largely on
your organization. If your inspectors prep just as hard, and play off
each other's ideas just as well in an informal setting, then sure, formal
inspections are just going to waste more time without much of a benefit
(discounting keeping track of metric information).
However, since you don't have any way of knowing this for sure unless
you keep metrics for both informal and formal inspections, it's hard to
say. The only published reports I've heard of indicate that formal
inspections do find more issues, albeit at a higher cost per issue. They
also indicate that the cost difference is more than made up by savings
down the line (except possibly in code inspections).
Of course, I'd assume that most published sources are from
organizations that inspections were well implemented at, so who knows
what the returns are if they aren't so well implemented. Also, since I
just went through training again, I'm a little gung-ho on the issue, if
you can't tell... ;-)
Mike Edwards - Shoreline, WA, USA
The usual reason that they are not used is because they are not REAL
work, ie not design and code, and therefore could not possibly
contribute to the value of the product. Moreover, they get in the
way of REAL work. Most quality improvement measures, including
defined process, fall into the same category.
Alternatively, many that have tried inspections have run them in a
half-hearted way, without training or even proper briefing for
inspectors, authors, or moderators, leading to personality clashes
and everyone's time being wasted. Badly managed inspections are
probably worse than no inspections.
For organisations that do not understand quality, do not measure
quality (or rather quality indicators), do not have a commitment to
quality, do not appreciate the cost of defects, and complain instead
about the cost of quality, inspections ARE a waste of time.
If you do introduce inspections, consider using them for most
internal products, including plans, analysis, design and test documents.
1.) I'm asked to inspect a design or piece of code out of context. Since I don't
really know what the thing is supposed to do I can't make intelligent comments; I
can only guess. It's a lame experience for everyone. To get the proper context
would require lots of extra work and research (perhaps a whole week).
2.) A lot of the issues that arise are opinion, not fact. I'll say "This will be
hard to maintain in the future"; someone else says "No, it's more object-oriented
the way it is!" Who's right?
3.) Even the best inspectors miss obvious things. Two weeks after a good
inspection I'll find a dumb and obvious error that the inspectors and I missed.
It makes me wonder if the process is very effective.
4.) We have big disagreements about what a design artifact looks like: some want
UML diagrams, some want text, some want CRC cards, and some want oral presentations.
5.) We haven't allocated enough time in the schedule for them. It's easy to see
them as wasting valuable time.
6.) We're collecting defect counts and rates as per PSP. Many people think PSP
is a joke and resent the bean counting.
7.) We're supposed to inspect a design or piece of code when it is done, but
they're never done. A month after an inspection I'll ask someone about their
piece and they'll say "Oh yeah I had to rewrite it because of this and that."
Makes me feel like the inspection was a waste of time.
I think some of our problems are due to our inexperience. I'm glad to say I
don't see too many ego problems. Most of us are mature enough to admit we make
mistakes and want help finding them. I think some of the problems arise from the
slow, infrequent, and delayed feedback that formal inspections give. It often
takes two weeks to schedule an inspection; if you're doing rapid development
that's just too long.
Personally I like doing reviews in the form of pair programming as advocated by
XP. We've done this at work in a more limited context and it seems to be more
effective and more fun. Unfortunately it's a very hard idea to sell. Bottom line
seems to be that the more people who look at the design or code, the more bugs they'll find.
My company has had great success with inspections over the last couple
years. We have a good set of guidelines and use individual inspections
instead of group inspections. All steps of the development cycle are
inspected; business analysis, specs, design docs, code, manuals, everything
gets inspected. We have checklists which correspond to the appropriate
stage in development to help catch problems early. Our inspections include
adherence to standards, style, correctness, and efficiency.
All of our inspections are done by peers within the group. Simply knowing
that someone else is going to critique your work has improved our work. It
has also kept egos out of the way. Since introducing inspections we have
been better able to meet deadlines and deliver a quality product. The
number of bugs found by our QA team and the length of their testing have been reduced significantly.
Are inspections my favorite part of my job? Not by a long shot, but with a
streamlined process they don't take too much time and I am much more
confident in the product when it goes to QA or production. That alone makes
it worth the effort.
> To make inspection work you need:
> 1. A checklist.
> 2. Preparation document trivial issues before you walk in.
> 3. A slow metered pace that targets defects
> 4. A cycle to ensure defects are corrected.
5. Standards - for the code or doc being inspected;
6. Agreed / approved source documents;
7. Training (particularly the moderator - for process issues - and the inspectors);
8. Records - both for the product quality trail and for process improvement.
As the title rather implies there are different problems for different
categories of work-product.
Chris Game <chri...@bigfoot.com>
This is an issue; a good checklist will help here, but you really need
to be involved in the project even if only on a peripheral basis.
> 2.) A lot of the issues that arise are opinion not fact. I'll say
> "This will be hard to maintain in the future", someone else says "No,
> its more object oriented the way it is!" Who's right?
This is an issue. Formal inspection focuses on black-and-white issues. Comments
such as the above might be made and curtailed quickly by the moderator.
The stylistic issues tend to flatten out to an agreed standard over
time, some differences are tolerated some are negotiated without
formal intervention. Isn't it better that these issues are dealt with early?
> 3.) Even the best inspectors miss obvious things. Two weeks after a
> good inspection I'll find a dumb and obvious error that the
> inspectors and I missed. It makes me wonder if the process is very
> effective.
Weeks after a program went into production a bug is found. The bug is
obvious and testing should have picked it up. Do we stop testing?
> 4.) We have big disagreements about what a design artifact looks
> like: some want UML diagrams, some want text, some want CRC cards,
> and some want oral presentations.
Stylistic issues again. These issues will resolve themselves over time.
> 5.) We haven't allocated enough time in the schedual for them. Its
> easy to see them as wasting valuable time.
Studies prove they save time in the long run.
> 6.) We're collecting defect counts and rates as per PSP. Many people
> think PSP is a joke and resent the bean counting.
If it is used then it is useful. If they are just collecting the numbers and not using them, then it is wasted effort.
> 7.) We're suppose to inspect a design or piece of code when it is
> done, but they're never done. A month after an inspection I'll ask
> someone about thier piece and they'll say "Oh yeah I had to rewrite
> it because of this and that." Makes me feel like the inspection was
> a waste of time.
I inspect every change. If it needed major rework then perhaps you
should review what you missed and add it to your checklist.
> I think some of our problems are due to our inexperience. I'm glad to
> say I don't see too many ego problems. Most of us are mature enough
> to admit we make mistakes and want help finding them. I think some of
> the problems arise from the slow, infrequent, and delayed feedback
> that formal inspections give. It often takes two weeks to schedual an
> inspection, if you're doing rapid development that's just too long.
> Personally I like doing reviews in the form of pair programming as
> advocated by XP. We've done this at work in a more limited context
> and it seems to be more effective and more fun. Unfortunately its a
> very hard idea to sell. Bottom line seems to be that the more people
> who look at the design or code the more bugs they'll find.
I put inspections as a high priority task. I take less than a day to
turn them around and expect the same in return. If you leave it too
long then people are tempted to test the programs, and that testing effort is wasted.
Thanks for a good summation of the problems with inspection. Hang in
there; when you lose them you will feel naked, once you are used to them.
Okay, let's start with the simple fact that a "code review" is not an
inspection. The inspection process is very detailed (when you use a
regular process like Fagan inspection). There are set criteria that must
be in place before an item can be inspected. There are roles that each
person in the inspection process plays. Without these roles,
responsibilities and starting documents in place there is NO inspection.
Code reviews are totally ineffective. Why? Because they focus on whether
the code does what the author intended; they don't ensure the code matches
the design, or that the design met all the requirements, or that the
requirements were complete. In other words, you are validating that coding
styles were used and that there appear to be no functional defects to the
stated intent. That doesn't ensure a valid piece of software that will
meet the requirement.
Alas, you look at the design doc and "inspect" it. It came from a
requirements document, but was the requirements document inspected? If
not, then you can rest assured that defects from requirements made it into
design. Was the design document inspected against the requirements
document? Have you determined that the design document didn't make a whole
bunch of assumptions that were not in the requirements? How are you
assured that the design is valid and defect free? Then you get to inspect
the QA Test plan against the requirements. You do ensure that the test
plan is written to the requirements document, not to the design or code, don't you?
I'm really quite glad that you think you do inspection, but chances are
all you are doing is reading the document and validating that it looks
defect free based on what it says, disconnected from the process of
creating it. You aren't doing an inspection. You are doing a document
review, and that is almost as big of a waste of time as a code
review. None of this stuff is created in a vacuum, and to review it in
that way makes the entire process nothing more than a feel-good exercise for those involved.
While I cut your initial part, which talked about automating the process
with code, that's great if all you are doing is looking at coding
style. You can even do that if you are looking for things like
uninitialized variables, etc. But in an inspection you are looking for
DEFECTS and those are generally typed as major or minor. The ones that
tools will catch are "minor" defects. Major defects are instances where
the programmer added a new feature that wasn't in the design (and may have
been a defect of the design, but the answer is to go back and fix the
design, not add a potential trouble spot into the code) or didn't
effectively cover all the design. You inspect CODE to DESIGN, and that can't
really be automated, as one is generally paragraphs of text and the other
is logic flows that represent those paragraphs of text.
Stop right there. You can't "inspect" an item without looking at the items
that generate it. If you are inspecting code and not inspecting it back to
the requirements, then you aren't inspecting it, you're doing a code
review (which is a waste of time). All the rest of your points fall under
this same answer, you are doing code reviews, not inspections.
Inspections start at the beginning: the requirements. To inspect something
like code that was generated from something like a design doc that wasn't
inspected just validates that you coded to the defects that you didn't
catch to begin with.
> 7.) We're suppose to inspect a design or piece of code when it is done, but
> they're never done. A month after an inspection I'll ask someone about thier
> piece and they'll say "Oh yeah I had to rewrite it because of this and that."
> Makes me feel like the inspection was a waste of time.
Once an item has been inspected, it is DONE. That's it. If there is a
need to recode the item then it MUST go through inspection again.
> I think some of our problems are due to our inexperience.
BINGO! May I suggest you do a very small project with a complete
inspection process. I personally recommend Fagan as I am a trained Fagan
moderator, but you can pick the one that best suits your needs. Once you
have done a complete project with the process you will understand the
value of doing all the projects that way. Once you see the first round of
"maintenance" or "enhancement" to something that has been through a
complete inspection, you will truly see the value of the process.
Don't believe it? Item 7 above is about rework. Rework happens because
defects weren't caught early enough. If the code passes inspection to the
design, and the design passes inspection to the requirements, then the
chances of rework drop dramatically. The same is true for maintenance.
BINGO! You do it right.
> Since introducing inspections we have
> been better able to meet deadlines and deliver a quality product. The
> number of bugs found by our QA team and the length of their testing has been
> reduced significantly.
Whoa, defects caught early reduce the cost of QA testing, and the rework
of catching them later in the process. That means that the development
cycle actually DECREASES when inspections are done properly.
Consider the following: a defect is created in the requirements document
and then is "caught" as a bug in the code in QA. To correct this defect
properly there has to be a review of the requirements, which will then
have to be modified. This will necessitate a change in the design that may
affect a number of pieces of code. Each piece of code will have to be
modified to ensure that the design changes were caught, which may create
additional new defects in all the code that had to be changed. Which now
all has to be re-inspected.
WOW! That could get expensive if you only caught a few defects in the
QA test phase. But when was the last time you saw a large project with
just "a few defects"? :-)
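The escalation just described can be sketched as a toy cost model (the stage multipliers below are illustrative assumptions in the spirit of the usual "cost to fix grows by stage" argument, not measured data):

```python
# Hypothetical escalation model: the later a defect is caught, the
# more upstream artifacts (requirements, design, several code pieces)
# must be revisited and re-inspected. Multipliers are made up.
STAGE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "code": 10,
    "qa_test": 50,
}

def rework_cost(stage_caught: str, base_cost: float = 1.0) -> float:
    """Cost to properly fix one defect, by the stage where it is caught."""
    return base_cost * STAGE_MULTIPLIER[stage_caught]

print(rework_cost("requirements"))  # fixed where it was created
print(rework_cost("qa_test"))       # caught in QA: everything upstream reopens
```

Whatever the true multipliers are in a given shop, the shape of the curve is the argument for inspecting early: a handful of requirements defects that survive to QA can dominate the whole rework budget.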
If you look at the context of this thread you will see that my comments
were in the context of how I *feel* about inspecting documents written
for human consumption versus code written for compiler consumption. I
was not making any kind of statement about whether or not to do formal inspections.
I don't want to get into an argument about semantics. Since my parenthetical
statement states I am not talking about formal inspections, I'd hoped it
would be clear that my usage of the word 'inspection' should be interpreted as
per your favorite dictionary.
I largely agree with your comments but they are out of context.
I don't agree with your assertion that reviewing documents is a 'waste
of time'. It has unfortunately been my experience that the folks involved
in developing requirements are reluctant at best to take responsibility for
the task. So it has been my experience that you have to closely *review*
their work and prod them like hell if you want any kind of a complete
document that can be used going forward. If you don't, it is the exercise
of bothering to write a lame requirements document that is the waste of time. In that
context I don't think reviews are a waste of time, and it's asinine to think
that the org. is mature enough to use formal inspections instead.
> While I cut your initial part, which talked about automating the process
> with code, that's great if all you are doing is looking at coding
> style. You can even do that if you are looking for things like
> uninitialized variables, etc. But in an inspection you are looking for
> DEFECTS and those are generally typed as major or minor. The ones that
> tools will catch are "minor" defects. Major defects are instances where
> the programmer added a new feature that wasn't in the design (and may have
> been a defect of the design, but the answer is to go back and fix the
> design, not add a potential trouble spot into the code) or didn't
> effectivly cover all the design. You inspect CODE to DESIGN and that can't
> really be automated as one is generally paragrpah and text and the other
> is logic flows that represent those paragraphs of text.
Not surprisingly you are missing the context of the thread. When I am asked
to inspect code I *feel* like I want to automate it. I'm not recommending
that it be done that way.
There are automated tools to locate problems in code, both statically and at
run time. Tools like PC-Lint, Code Wizard, BoundsChecker, Purify, etc., but
what percentage of developers use even those tools?
I agree, a review is not a waste of time. In order to make the review
work, you have stated that you need to look at it closely; this borders
on inspection without the formality.
There are some (many) definitions for inspections; here are the ones I
work with:
walk-through - an informal browse of the code
review - a high-level presentation to users and peers not directly involved
Inspection - a careful detailed look at the product without paperwork.
Formal inspection - as per above with a lot more focus on extracting
defects, using checklists to ensure you have not missed anything,
using a log to ensure that all issues are resolved and not lost.
Inspections can be very effective, they must be approached in the
right way however. The 'duty of care' as lawyers put it must be
foremost in your mind. Most junior engineers (less than 5 years' experience) do
not fully understand this. A lot of senior ones appear to have missed
it as well.
I hate it when Ken and I disagree...
> There are some (many) definitions for inspections, here are the ones I
> work with;
> walk though - informal browse of the code
Use computer-based syntax validators. It's faster and better than a "walk-through".
> review - high level presentation to users and peers not directly
For the purpose of? Unconnected to the pre-process items (which should
have been inspected) the review has no real function. If connected to the
items that have come before, how does this differ from an
inspection? Isn't the goal of this process to generate a list of
defects? Isn't the goal to use objective criteria to find those
defects? What benefit does this have over that of inspection?
> Inspection - a careful detailed look at the product without paperwork.
Checking of more than syntax, form and structure? What is being validated
by this process? That the code does what the comments inside say it
does? The power of an inspection is the power of the process. That process
has specific roles and rules. The roles allow for people to stand outside
their normal point of view, this forces them to actually inspect the item,
not just give it a glance over. The rules are there to ensure that the
process is *not* subjective.
In most inspections that are not run with a well defined set of rules and
roles you see the inspections fail as people do a minimal review of the
item to be inspected, and then make comments that are based on subjective
criteria. A defect is *NOT* subjective. The measurement (major/minor) may
be subjective in early inspections as the teams create the rules for
what constitutes a major or minor defect, but what is a defect is *not* subjective.
> Formal inspection - as per above with a lot more focus on extracting
> defects, using checklists to ensure you have not missed anything,
> using a log to ensure that all issues are resolved and not lost.
Every inspection should be focused on locating defects, logging defects
and scaling the defect. Part of every inspection should also include a
session where defect analysis is performed. Was this defect put in through
a defect in the development process? Defects don't happen in a
vacuum. Defects are based on either sloppy work or sloppy processes. The
goal of the inspection is to ensure that both are caught. The goal of the
defect analysis portion of the inspection process is to ensure that the
development process is made better, so there will be less chance of a
defect in the future.
If a company is doing inspection to make sure the code is defect free, and
not doing defect analysis, they are missing the greatest benefit from the
inspection process, that of CANI (Constant And Never-ending
Improvement). If there is no defect analysis then there is no learning
process from our mistakes, and if we fail to learn from them we are bound
to repeat them.
[yes, the CANI in canitech comes from the above belief system. We all
have processes in our "life" that we do. Some are more successful than
others. We can learn from the success, and from the failures (defects), to
create better processes. Inspections are not just a theory that has to do
with software development, but has to do with all parts of any process
based organism, whether it is a person, a company or the development of a
piece of software.]
J.M. Ivler <iv...@net-quest.com> wrote in message
Inspection (and some would say "formal inspection") requires that each
item be defect free before the next part of the process start. How can you
be assured that the design of your system is correct if you don't have
complete and defect free requirement specifications?
Now, some will say that if I get some specification I can start on the
design and when the requirements are completed, then I can validate my
design to the completed and inspected defect free requirements. This
overlap of processes is possible, if one is willing to redo work that had
been designed to a requirement that was then found to be defective and had
to be restructured. In other words, what appears to be a potential cost
savings can in fact be far costlier.
If the process adopted is to develop a defect free requirements
specification, then when the design is done it will have to be done once,
to that defect free specification. The development team can then validate
the design against the requirements specification, as well as standards
that have been deemed appropriate for the design exit criteria. Once the
design has exited defect free, then coding can commence.
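The gated flow described above can be sketched in a few lines. This is a minimal sketch under my own assumptions; the names `WorkProduct` and `enter_phase` are hypothetical, not from any real inspection tool:

```python
# Minimal sketch of the gated flow described above (hypothetical names).
# No phase may start until its predecessor has exited inspection defect free.
from dataclasses import dataclass, field

@dataclass
class WorkProduct:
    name: str
    open_defects: list = field(default_factory=list)

    def exited_defect_free(self) -> bool:
        return not self.open_defects

def enter_phase(phase: str, predecessor: WorkProduct) -> str:
    # The exit criterion is the gate: refuse to start work on a
    # successor item while the predecessor has open defects.
    if not predecessor.exited_defect_free():
        raise RuntimeError(
            f"cannot start {phase}: {predecessor.name} has "
            f"{len(predecessor.open_defects)} open defect(s)")
    return f"{phase} started"

reqs = WorkProduct("requirements spec", ["ambiguous timeout in section 3"])
# enter_phase("design", reqs) would raise here: requirements not defect free.
reqs.open_defects.clear()           # rework + re-inspection closes the defect
print(enter_phase("design", reqs))  # prints "design started"
```

The point of the gate is exactly the cost argument above: the redesign loop happens before design starts, not after.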
While the design process is going on, there is also a team developing a
testing specification to the same defect free requirements
specification. Since the test plan has been developed to validate and test
the requirements, not the design, the possibility that the product will
work to spec is much higher.
While coding is going on, portions of the test plan can be used to validate
that the code works to the requirements specification. Once the code has
been inspected and is defect free, the entire inspected test suite is run
against the application to ensure that it performs to spec, defect free.
In most shops the test plan is written by the same people that write the
code ("I know how best to test my own software"). This is even true in
ISO9000 shops (document the practice; as long as the process is documented,
it doesn't matter if it's good). In the end, the test plan tests that the
code operates functionally as it was programmed to do. In some cases the
test plan might actually be written to the design document (inspected or
not). This only validates that the code functions as designed, not as required.
> Anyway, I devote part of my code review to reverse engineering a Software
> Design Spec from the code, and most often it is far better than the one the
> programmers were using.
But does that mean the code they generated is defect free, and/or, that it
meets the requirements? While it's wonderful that you have the talent and
ability to create a design spec from a code component, are you sure that
the code operates as was planned, per the requirements spec?
> You also said "the review has no real function"
> I have always found errors during this process, mostly 'minor' (ie cheap to
> fix, but needing fixing)
A defect is nothing in some industries. In others it can cause a
multi-million dollar spaceship to disappear. In one of the industries I
worked in it could cause a plane loaded with passengers, like my mom, or
my wife and kids, to go down. Imagine an MD-11 belly tank transferring
fuel to the wet tail and back again. This process happens to ensure that
while the fuel flows, the system maintains the stability of the flight as
well (weight transference). What if there was a simple error in this
code? What if, when a sensor went out, the fuel system started trying to
manage the flow itself, started the pumps one way, over-pumped, and then
started the pumps the other way, creating an oscillation of the plane
that, like an expanding wave, got worse with each switch of the flow? One
of my greatest nightmares was seeing this reported in the RISKS forum.
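The failure mode described above can be illustrated with a toy control-loop simulation. All numbers and names here are hypothetical (this is not the actual MD-11 logic): a transfer loop that over-corrects against a stale sensor reading produces exactly this kind of expanding oscillation.

```python
# Toy illustration of the failure mode above (all numbers hypothetical).
# A transfer loop opposes the measured fore/aft imbalance, but with too
# much gain: it over-pumps past the balance point. With a healthy sensor
# the oscillation still damps out; with a stale (delayed) sensor reading
# it grows with every switch of the pumps.

def simulate(steps, sensor_delay=0):
    imbalance = 1.0                     # initial imbalance, arbitrary units
    readings = [imbalance] * (sensor_delay + 1)
    history = []
    for _ in range(steps):
        readings.append(imbalance)
        reading = readings[-(sensor_delay + 1)]  # possibly stale sample
        imbalance -= 1.5 * reading               # over-corrective transfer
        history.append(imbalance)
    return history

healthy = simulate(12)                  # damped: amplitude halves each step
failed = simulate(12, sensor_delay=1)   # stale sensor: amplitude grows
```

With a healthy sensor, each step multiplies the imbalance by -0.5, so the swings die out; a one-step sensor delay makes the same over-correcting loop unstable, and each pump reversal swings the system farther, like the expanding wave described.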
I didn't come to inspection because it made my job easier. It made it
possible for me to sleep easier at night knowing that my family and
families like mine were riding on software that was defect free. And I
stayed with the inspection process after the fact because I saw it save
millions of dollars in software development (even before there was OO it
helped in developing reuse libraries).
I am sorry but this is bullshit.
1. NO inspection (formal or otherwise) can give you a defect free
system. It relies on people, who make mistakes just as the original
documenter or programmer did. Requirements are
sometimes so hidden that you cannot dig them out.
2. You may choose to proceed to the next phase despite the known
problems because the deadline or finances or whatever tells you to.
I fully support inspections; they have steered me a true course so
many times, and when they are not done my projects typically have far
more problems. A 95% defect free release on a production system is a very
good result; 80% is the typical result from non-inspecting projects.
If you want to go to extremes on inspections (e.g., medical), then 99% is
achievable, and expensive.
If you shoot to be good, you will be average. If you shoot to be great you
will be good. If you shoot to be outstanding you will be great. If you
shoot to be perfect you will be outstanding.
Nothing is ever 100%, but if you start by saying that you are willing to
accept 95%, then you have already compromised your ability to ever get
close to 100%. And, by shaving a percentage point here or there you can
sooner or later justify that 90% is just as good because it's so much
better than 80%.
The process isn't perfect. I have held a requirements document that we
had to go back and change, even though we thought it was 100% defect
free, because we found defects while in design. But, because we had such a
high standard of being defect free, we generated code that I would bet my
life and my family's life on.
> 1. NO inspection (formal or otherwise) can give you a defect free
So what percentage of defects do you accept as acceptable in a life support
system that you may be hooked up to? How about in an aircraft you're about
to ride across the ocean in?
> I fully support inspections, they have steered me a true course so
> many times and when they are not done my projects typically have far
> more problems. A 95% defect free release on a production system is a very
> good result; 80% is the typical result from non-inspecting projects.
> If you want to go extreme on inspections (i.e. medical) then 99% is
> achievable and expensive.
Why shoot for 95%? Why start with expectations that say that it's okay to
have a 5% failure? When you start an inspection, as a moderator, do you
start by saying "okay folks, I want to get 95% of the defects." or do you
say "we are here to ensure that this item is defect free."? If you aren't
going to give 100%, then how the hell do you expect to get 100%? Pure
luck? It isn't even close to achievable if you don't seek to achieve it!
Sorry Ken, we disagree here. I don't accept 95%, and the next time you get
in a plane that was built by Douglas after 1992, remember that not one of
the Fagan Moderators believed in 95% defect free as an acceptable standard
for software that sat on that plane.
Concerning your statements:
> Okay, let's start with the simple fact that a "code review" is not an inspection.
> Code reviews are totally ineffective...
I completely disagree. Experience from practice shows that reviews are a
very effective way to detect defects. Actually, inspections are a special
form of review! An inspection is, therefore, a "formal review". Read
the book "Software Inspection" by Tom Gilb, Dorothy Graham and
Susannah Finzi. Or take a look at the book of books about testing, by
Myers. Balzert (a German author) also describes inspections as "formal
reviews". Where do you get the idea that a review is not an
inspection, despite the fact that the whole world has a different point
of view?
"J.M. Ivler" wrote:
> If you shoot to be good, you will be average. If you shoot to be great you
> will be good. If you shoot to be outstanding you will be great. If you
> shoot to be perfect you will be outstanding.
> Nothing is ever 100%, but if you start by saying that you are willing to
> accept 95%, then you have already compromised your ability to ever get
> close to 100%. And, by shaving a percentage point here or there you can
> sooner or later justify that 90% is just as good because it's so much
> better than 80%.
In most of the world economics plays a large role. An optimal solution
in most situations from the perspective of the company is almost always
well short of perfection. If you build a perfect word processor you
won't sell one because nobody will be able to afford the damn thing.
Unless of course you are willing to sell it at a loss.
The point is that you can't make these broad statements like they apply
to everybody. They might apply to you. I hope they apply to the folks
building software to control aircraft, space shuttles and life support
systems. But that is a small fraction of the SE world.
> concerning your statements
> > Okay, let's start with the simple fact that a "code review" is not an inspection.
> > Code reviews are totally ineffective...
> I completely disagree. Experience from practise show that reviews are a
> very effective way for defect detection.
Define defect detection. If you are saying that a code review can spot bad
programming practices, bad software development, and bad syntax or logic
errors, I won't disagree. But it doesn't come close to allowing you to
create defect free software. The time spent in review follows the
80/20 rule. You will catch 80% (the gross defects), but will miss the
20%. Some of these will show up in testing. If the test plan is not
written by the person creating the code, this number will be greater than
if the coder writes the tests. The cost of fixing a defect caught in
test is higher than if it had been caught in code.
But what if the defect caught in test was generated in the initial
requirements? What is the cost of fixing the defect at the test point,
rather than catching and fixing the defect when it originated?
Code reviews are a waste of time because the 20% of the defects that get
through are far too costly.
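To make that cost question concrete, here is a rough sketch using the kind of phase-to-phase cost escalation figures often quoted in the literature. The multipliers are illustrative assumptions (in the spirit of Boehm's oft-cited observations), not measurements from any particular project:

```python
# Rough cost-escalation model for the question above. The multipliers
# below are illustrative assumptions, not measured data.
RELATIVE_FIX_COST = {
    "requirements": 1,   # cheapest: fix the words before anything is built
    "design": 5,
    "code": 10,
    "test": 50,
    "field": 150,        # costliest: shipped product, real users
}

def escalation(origin: str, caught: str) -> float:
    """How many times more a defect costs when caught in a later phase."""
    return RELATIVE_FIX_COST[caught] / RELATIVE_FIX_COST[origin]

print(escalation("requirements", "test"))   # 50.0: a requirements defect
                                            # found in test costs ~50x
```

Under these assumptions, a requirements defect that slips through to system test costs roughly fifty times what it would have cost to fix in a requirements inspection, which is the whole argument for inspecting each work product before the next phase starts.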
> Where do you get the idea to assert that a review is not an
> inspection despite the fact that the whole world has a different point
> of view ?
The whole world? I think not. I know that I was trained as a Fagan
Moderator, and in that training, way back in (what?) '91 I guess, there was
even a slide that said "Inspection != Review". Michael was teaching that
then, and I have yet to hear it taught differently.
> If you build a perfect word processor you
> won't sell one because nobody will be able to afford the damn thing.
> Unless of course you are willing to sell it at a loss.
If you were to build a car like Microsoft builds their operating system,
would you buy and use it? Microsoft is trying to get share in the server
marketplace, but it can't, because people have Linux servers that they have
kept up and running for 200+ days, and no Microsoft product has that level
of quality and reliability. Yes, Microsoft flourishes, and we all pay for it
with the "blue screen of death" and other fine features of a shit quality
product. Can you get away with building crap? Yep. Can you become
successful with that strategy? Yep.
But watch out, or someday someone might just beat your butt with quality
(not everyone wants to pay to drive a quality car; that's why Kia sells
cars in America: some people are always willing to settle for 80%).
> The point is that you can't make these broad statements like they apply
> to everybody. They might apply to you. I hope they apply to the folks
> building software to control aircraft, space shuttles and life support
> systems. But that is a small fraction of the SE world.
As Microsoft has proved so well.
And how many sites do you return to that give you "404-Not Found" errors?
When given a choice, quality will win, but as you pointed out, it matters
how much it costs for that quality. The WSJ charges people to access their
web-site. Not everyone pays for that. The people that want that quality in
reporting and news do.
> Inspection (and some would say "formal inspection") requires that each
> item be defect free before the next part of the process starts. How can you
> be assured that the design of your system is correct if you don't have
> complete and defect free requirements specifications?
Looks hopelessly unrealistic to me; in fact, I thought "defect free" was
recognised as impossible.
>Francis <nosp...@cwcom.net> wrote:
>> I do not know what environment you work in, especially when
>> you say "the pre-process items (which should have been
>> inspected) ". Do you mean that you really expect that the
>> preceding step of the lifecycle should have actually been
>Inspection (and some would say "formal inspection") requires
>that each item be defect free before the next part of the
>process starts. How can you be assured that the design of your
>system is correct if you don't have complete and defect free
Not necessarily. Read Tom Gilb's Principles of Software
Engineering Management or Software Inspection. Although he
requires that an item be defect free before starting the next
stage, this does not necessarily mean complete. Or, rather, it
only needs to be complete within the bounds of what you are
trying to achieve in the current evolutionary stage.
>Now, some will say that if I get some specification I can
>start on the design and when the requirements are completed,
>then I can validate my design to the completed and inspected
>defect free requirements. This overlap of processes is
>possible, if one is willing to redo work that had been
>designed to a requirement that was then found to be defective
>and had to be restructured. In other words, what appears to
>be a potential cost savings can in fact be far costlier.
But the risks are greatly reduced if sensible bounding is
applied at all points. We have fought for years to get away
from the Waterfall; don't throw all that away.