Code inspections

Mark A. Swanson

Jan 28, 1991, 10:58:44 AM
In practice we have not found programmers' egos to be a major obstacle
to properly conducted Code Inspections. This, of course, assumes that
the Inspection process actually follows the defined cookbook approach,
complete with a moderator who keeps the discussion on track and non-personal
and a separate reader who actually goes through the code (or design document:
Inspections work well for those as well) one piece at a time. In addition, it
is absolutely forbidden for someone's manager to help inspect his product or
to use the number of defects found by an inspection as part of a performance rating.

It helps sociologically, I suspect, if the first few pieces of code inspected
are from the senior technical people. (I have certainly found inspections
useful.)

The major problem is in scheduling if the process model does not include
inspections. They do take time and there are limits to how many anyone
can go through per week (about 2 max, I think.) This tends to make Inspections
a major time block on the project pert chart (even if broken up by area) and
therefore they are very hard to add in to an existing schedule.

The problems are all solvable, but it requires full project and technical
management support to introduce this or any other significant innovation
that changes how one develops software. If ego problems are blocking
inspections, then one isn't running inspections right.

Mark A Swanson
Senior Principal Engineer
GenRad, Concord, MA
m...@genrad.com

Patrick Powers

Jan 27, 1991, 7:35:20 PM
It seems that the problem with code inspections is largely emotional.
Though there is plenty of evidence that code inspections are cost
effective, I believe they would tend to be boring and stressful.
Boring because they are a time-consuming and non-creative activity --
the current issue of IEEE Software recommends 150 lines of code reviewed
per man-day as a good figure. I know I would not want to do this, and
who would? Stressful because it is out of the programmer's control,
and because criticism is involved. People identify closely with their
creations and find criticism painful.

Not only that, but your average programmer was very likely attracted to
programming in order to avoid social interaction and to create
something under his/her personal control without anyone else watching.
He/she is likely to be on the low end of the social tact scale and
singularly unqualified to deal with this delicate situation. Again,
this may very well have attracted them to programming: it doesn't
matter whether anyone likes their personality, all that counts is
whether the program works.

In order to reduce these problems the following has been suggested:
1) The author not be present at the inspection
2) Only errors are communicated to the author. No criticism of style allowed.

I've toyed with the idea of instituting code inspections but just
couldn't bear to be the instrument of a good deal of unhappiness. It
seems to me that it could work with programmers directly out of college
who feel in need of guidance. It also might succeed in a large
paternalistic organization as these would be more likely to attract
group oriented engineers. Note that the classic studies of code
inspection occurred at mammoth IBM.

In spite of all this, I think code inspections would be accepted in any
application where there is a clear need such as the space shuttle
program where reliability is crucial and interfaces are complex. In
such cases code inspections are clearly a necessity, and engineers
might welcome --or at least, tolerate -- them as essential to getting
the job done. On the other hand in routine applications with a good
deal of boiler plate code they could be a "real drag", exacerbating the
humdrum nature of the task.


Dick Dunn

Jan 28, 1991, 5:52:31 PM
p...@megatest.UUCP (Patrick Powers) writes:
...

> Though there is plenty of evidence that code inspections are cost
> effective, I believe they would tend to be boring and stressful.
> Boring because they are a time consuming and non-creative activity --
> current issue of IEEE Software recommends 150 lines of code reviewed
> per man-day as a good figure...

Well, we all know that lines of code is a lousy measure of anything except
the number of newlines (don't we?:-), but still, if this measure is
anywhere close to real, it's a much stronger argument than Powers suggests
against code inspections. A halfway-decent programmer can produce several
times that 150 l/d figure...proceeding through anything at 20 lines/hour
(that's 3 minutes per line, effectively???) is too slow to feel productive.
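Spelled out as a quick sketch (in Python; the 8-hour working day is my assumption, since the post doesn't state one):

```python
# Checking the per-line figure above, assuming an 8-hour day.
loc_per_day = 150
lines_per_hour = loc_per_day / 8          # 18.75, i.e. roughly 20
minutes_per_line = 60 / lines_per_hour    # 3.2, i.e. about 3 minutes
print(lines_per_hour, minutes_per_line)   # 18.75 3.2
```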

> ...Stressful because it is out of the programmer's control,
> and because criticism is involved. People identify closely with their
> creations and find criticism painful.

Criticism may be somewhat painful inherently, but again I'm going to speak
about a "halfway-decent programmer" and say that such a person has long ago
transcended deriving personal injury from criticism of the code. Good
grief, the *compiler* picks your code apart early on. There are enough
opportunities to confront one's human frailty and fallibility in a day of
programming that I don't think this holds water. Sure, there are prima
donnas and cowboy kids in the programming world, but they're not in the
mainstream. Train 'em to accept criticism or get rid of 'em!

In my experience, when I hear a programmer (of the type I know/respect and
have been around for years) who's looking at code say something like
"That's idiotic! That's absurd! There's no way in hell that could
possibly work, you bozo!" - it's 95% certain he's talking about his own
code.

> Not only that, but your average programmer was very likely attracted to
> programming in order to avoid social interaction and to create
> something under his/her personal control without anyone else watching.

This is a fun thing when we joke about it, but it's pretty crappy to
pretend that it's serious. I think the average programmer was attracted to
programming either because of the $ or because programming is
fun/interesting. Sometimes, long stints at the terminal leave you without much
social interaction for a while, so it's at least plausible to hypothesize
that "the average programmer" can handle a low level of social interaction.
That doesn't mean it's sought out. Don't confuse correlation with
causality.

> He/she is likely to be on the low end of the social tact scale and
> singularly unqualified to deal with this delicate situation...

If you've got a bunch of low-tact people, the situation isn't delicate!

> In order to reduce these problems the following has been suggested:
> 1) The author not be present at the inspection

That means any minor question will have to be transcribed instead of being
answered on the spot. It eliminates useful feedback.

Also, if you've got the class of delicate egotists you've described, it
means the author will be fretting about what people are saying about his
precious code behind his back.

> 2) Only errors are communicated to the author. No criticism of style allowed.

Huh? I don't want to put words in your mouth, but this sounds like either
style isn't important enough to criticize, or at the least, that style
takes a back seat to coddling the egos.

> I've toyed with the idea of instituting code inspections but just
> couldn't bear to be the instrument of a good deal of unhappiness...

Instead of "instituting" them, why not simply allow them to happen? As you
note later in the article, there are some cases where going over particular
code is a Very Good Idea, and other cases where it's Massively Boring and
Useless. So let people figure out when they need to go over the code, and
at what level. (Sometimes you want to go over the high-level--the data
structures, the general breakout. Once in a while you really want to go
over each line of a small section in excruciating detail.)

Think of it this way: Code inspection is a tool. You don't use every tool
for every job.
--
Dick Dunn r...@ico.isc.com -or- ico!rcd Boulder, CO (303)449-2870
...Mr. Natural says, "Use the right tool for the job."

r...@crl.labs.tek.com

Jan 28, 1991, 6:10:17 PM
In article <14...@megatest.UUCP>, p...@megatest.UUCP (Patrick Powers)
writes:

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.
>He/she is likely to be on the low end of the social tact scale and
>singularly unqualified to deal with this delicate situation. Again,
>this may very well have attracted them to programming: it doesn't
>matter whether anyone likes their personality, all that counts is
>whether the program works.

I think this is really ridiculous. Pigeonholing the "average" programmer
as some unprofessional, nerd-dweeb is a little out there. If a person is
unable to perform professionally because of social/emotional problems
then perhaps they are in the wrong profession. I don't think this describes
today's "average" programmer at all.

In article <40...@genrad.UUCP>, m...@genrad.com (Mark A. Swanson) writes:
> In practice we have not found programmer's egos to be a major problem
> to properly conducted Code Inspections. This, of course, assumes that
> the Inspection process is actually following the defined cookbook approach,
> complete with moderator who keeps the discussion on track and non personal
> and a separate reader who actually goes through the code (or design document:
> Inspections work well for them as well) one piece at a time. In addition, it
> is absolutely forbidden for someone's manager to help inspect his product or
> to use the # of defects found by an inspection as part of performance rating.
>

I just don't see this as being realistic. In what other job where a product
(software in this case) is produced do you find people not being judged on
the quality of their output? People absolutely have to be judged on how well
they perform their jobs and if putting out high quality software is their
job then their manager should be able to measure their job performance and
react accordingly.

I find the attitude expressed in the above two postings very disturbing. They
seem to imply that people involved in software production somehow need to be
treated very differently from people involved in other types of production.
You aren't allowed to measure their work, you can't judge their work, you
can't evaluate their job performance based on their work. This kind of
thinking is what is keeping software from becoming a true engineering
discipline and it is why so much software is so bad. Programmers need to
realize that their job is producing high quality software, not just a
program that "works". They need to be held accountable for their work.
I have found that true professionals don't mind code inspections and
walkthroughs, because they are confident in their ability and proud of
their work.


Disclaimer: This is my opinion only. Tektronix may not share my opinion.


Jim Nusbaum, Computer Research Lab, Tektronix, Inc.
[ucbvax,decvax,allegra,uw-beaver,hplabs]!tektronix!crl!rjn
r...@crl.labs.tek.com
(503) 627-4612

michael vincen conca

Jan 28, 1991, 10:00:49 PM
In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:

> It seems to me that it could work with programmers directly out of college
> who feel in need of guidance. It also might succeed in a large
> paternalistic organization as these would be more likely to attract
> group oriented engineers. Note that the classic studies of code
> inspection occurred at mammoth IBM.
>

Speaking as both a programmer who is in college and a programmer who is in
the business world and has gone through numerous code inspections, I would
have to say that they are a novice programmer's worst nightmare.

I agree that there is a feeling of needing some guidance when you first start,
but a code inspection is a difficult place to get it. Generally, people
review your code looking for logic errors and the like, and usually they will
find a lot more in yours than in the other programmers' code. Of course, this
is to be expected since you haven't had much experience and the others
know what has been done in the past and what the general operating procedures
are.

While there is nothing wrong with this, it may leave the inexperienced
programmer with a feeling of inadequacy. If the code inspection is handled
poorly or is particularly harsh (in the eyes of the novice), it could leave
him/her feeling incompetent and questioning their abilities or education.

One of the hardest things that I found to do was to review the code of senior
programmers. This may sound silly, but when you get your first programming
job it isn't exactly easy to tell the person who hired you that they might
have done something wrong. In a worst-case scenario, there may be some
grizzled programmer on the team who is unwilling to admit that a new kid
might know something they didn't. Of course, any programmer who is unwilling
to accept new ideas isn't much of a programmer.

-=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=-
Mike Conca, Computer Science Dept. * co...@handel.cs.colostate.edu
Colorado State University * co...@129.82.102.32
"Everyday, as the network becomes larger, the world becomes smaller."

Christopher Lott

Jan 29, 1991, 11:15:24 AM
In article <40...@genrad.UUCP>, m...@genrad.com (Mark A. Swanson) writes:
>> In addition, it
>> is absolutely forbidden for someone's manager to help inspect his product or
>> to use the # of defects found by an inspection as part of performance rating.
>>

<73...@tekchips.LABS.TEK.COM> r...@crl.labs.tek.com (Jim Nusbaum) replies:
>I just don't see this as being realistic. What other job where a product
>(software in this case) is produced do you find people not being judged on
>the quality of their output?

I think Mr. Nusbaum misses the point slightly. A manager or group
leader who introduces code inspections into a s/w environment must
assure the group that "number of faults found during inspections"
will not suddenly dominate performance appraisals.

Of course people must be judged on the quality of their work - but
define software quality for me ;-)
Number of faults detected in inspections is important, but beware
of attaching too much importance to this figure.

Joe may make silly logic errors, but his skill at design and at spotting
problems early is invaluable. You don't want to stifle such a person's
abilities. Likewise, if detecting faults in another person's work directly
results in negative performance ratings, think of the chilling effect this
could have on peer review in a friendly group - or the inflammatory opposite
result among antagonists.

chris...
--
Christopher Lott Dept of Comp Sci, Univ of Maryland, College Park, MD 20742
c...@cs.umd.edu 4122 AV Williams Bldg 301-405-2721 <standard disclaimers>

bernard.w.fecht

Jan 29, 1991, 1:32:27 PM
In article <29...@mimsy.umd.edu> c...@tove.cs.umd.edu (Christopher Lott) writes:
>
>Of course people must be judged on the quality of their work - but
>define software quality for me ;-)

It meets customer expectations.

But I think I agree with Chris's answer. Certainly a programmer needs
to be evaluated based on their output, but process meters cannot be the
sole source for evaluating the programmer or else the process won't
be followed (or it may be "tricked" into showing things that are not
real.) Some would even say that NO process meters should be used, which
may be true. I'm sure there are plenty of indicators available that
have nothing to do with the "process".

Another argument is that process meters usually show "faults leaked
from one stage to another." These indicators evaluate the process
and the team's ability to run it. No single programmer should be held
accountable for a buggy module in the field -- many have been involved
in getting that module out there. Even the most obvious indicator
to me, i.e. maintainability, is a team job and is not likely to be the
result of one person's work.

Bill Wagner

Jan 29, 1991, 3:34:21 PM
In article <73...@tekchips.LABS.TEK.COM> r...@crl.labs.tek.com (Jim Nusbaum) writes:
>[reference from earlier post deleted]

>I just don't see this as being realistic. What other job where a product
>(software in this case) is produced do you find people not being judged on
>the quality of their output? People absolutely have to be judged on how well
>they perform their jobs and if putting out high quality software is their
>job then their manager should be able to measure their job performance and
>react accordingly.
>
>I find the attitude expressed in the above two postings very disturbing. They
>seem to imply that people involved in software production somehow need to be
>treated very differently from people involved in other types of production.
>You aren't allowed to measure their work, you can't judge their work, you
>can't evaluate their job performance based on their work. This kind of
>thinking is what is keeping software from becoming a true engineering
>discipline and it is why so much software is so bad. Programmers need to
>realize that their job is producing high quality software, not just a
>program that "works". They need to be held accountable for their work.
>I have found that true professionals don't mind code inspections and
>walkthroughs, because they are confident in their ability and proud of
>their work.
>

Your post seems to be combining two justifications for code inspections
into one. When I have had my code inspected, it was to happen before
any testing of the code took place (by definition, after the first
clean compile). The justification for that was that other experienced
programmers could spot errors and suggest small-scale design improvements
before testing occurred. (Major design suggestions should have already
been received at design reviews). The end results were:
1. smaller, faster code
2. less time spent debugging.

1. is a little tough to justify, but I believe it based on changes I
made or suggested.

2. is easy to justify. Any logic error found before testing begins
means less time spent in the debugger and in the testing -> fixing ->
re-testing cycle.

Now, if you wish to examine my (or anyone else's code) for
judging the quality of work, I'd rather it was done after the
testing cycle. That way, you are judging the completed code,
as opposed to a first draft.

I agree that programmers need to accept responsibility for the
quality of their work, but forced examinations aren't the best
way for that to happen. The idea of a review (whether it be a
design review, or a code review) is to allow peers the opportunity
to comment on the proposed solution to a technical problem. The
reviews should remain just that. Now, if you wish to measure a
programmer's performance based on the quality of code that the
programmer has agreed is ready to be released, that is another
matter entirely.


--
Bill Wagner USPS net: Cimage Corporation
Internet: wa...@cimage.com 3885 Research Park Dr.
AT&Tnet: (313)-761-6523 Ann Arbor MI 48108
FaxNet: (313)-761-6551

Barry Kurtz

Jan 29, 1991, 10:55:48 AM
We've used code inspections with great success. All of our code must be
inspected by a group of our peers prior to module integration and
execution. This has greatly reduced bugs in our code and improved the
uniformity and integration of code modules. I highly recommend code
inspections for any serious software project involving a team of
developers.


Barry Kurtz
Hewlett-Packard

These comments are my own and do not necessarily reflect the opinions of
my company.

Brad Cox

Jan 29, 1991, 5:06:33 PM
In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.

While not in any way arguing against your point, or about the utility of
code inspections in general, isn't it about time that we started breaking
our infatuation with the *process* of building software (source code,
style rules, programming language, lifecycle, methodology, software development
process, CASE, etc., etc.) and started concentrating on the *product*
itself?

To me, the paradigm shift that we're facing is figuring out how to comprehend
software products, which, unlike manufactured things like firearm parts,
are intangible...undetectable by the natural senses.

I envision tools to assist in understanding the static and dynamic properties
of a piece of code the way physicists study the universe, not by asking
how it was built (a process question), but by putting it under test to
determine what it does.

Consider two views of a stack class. The conventional view leads us to
ask what language it was written in, and perhaps read the source to see
what it does.

I'm proposing another view from the *outside*. This view ignores the process
whereby it was constructed. This involves specifying its static (does it
provide methods named push and pop?) and dynamic properties (does pushing
1,2,3 cause pop to return 3,2,1?)
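To make the contrast concrete, here is a sketch of that outside view as a tiny test harness. The language (Python) and every name in it are my own invention for illustration, not part of Cox's proposal:

```python
# Hypothetical black-box probe of a stack "product": we never read its
# source, we only test its externally observable properties.

class Stack:
    # Stand-in implementation so the probes have something to run against;
    # in Cox's scenario this could be any opaque component, in any language.
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

def has_static_properties(obj):
    # Static property: does the product provide methods named push and pop?
    return callable(getattr(obj, "push", None)) and \
           callable(getattr(obj, "pop", None))

def has_dynamic_properties(obj):
    # Dynamic property: does pushing 1,2,3 cause pop to return 3,2,1?
    for x in (1, 2, 3):
        obj.push(x)
    return [obj.pop() for _ in range(3)] == [3, 2, 1]

s = Stack()
print(has_static_properties(s))   # True
print(has_dynamic_properties(s))  # True
```

Note that neither probe reads a line of the product's source; any component that answers push and pop could be substituted and the same checks would still apply.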

Again, I'm not arguing against the need for a white-box view; only in
favor of a belts and suspenders approach in which we also beef up our
tools for capturing and reasoning about black-box information.
--

Brad Cox; c...@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

Jon Kutemeier

Jan 29, 1991, 12:09:44 PM

In article <73...@tekchips.LABS.TEK.COM> r...@crl.labs.tek.com writes:


In article <14...@megatest.UUCP>, p...@megatest.UUCP (Patrick Powers)
writes:

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.
>He/she is likely to be on the low end of the social tact scale and
>singularly unqualified to deal with this delicate situation. Again,
>this may very well have attracted them to programming: it doesn't
>matter whether anyone likes their personality, all that counts is
>whether the program works.

I think this is really ridiculous. Pigeonholing the "average" programmer
as some unprofessional, nerd-dweeb is a little out there. If a person is
unable to perform professionally because of social/emotional problems
then perhaps they are in the wrong profession. I don't think this describes
today's "average" programmer at all.

I tend to agree.

In article <40...@genrad.UUCP>, m...@genrad.com (Mark A. Swanson) writes:

> In practice we have not found programmer's egos to be a major problem
> to properly conducted Code Inspections. This, of course, assumes that
> the Inspection process is actually following the defined cookbook approach,
> complete with moderator who keeps the discussion on track and non personal
> and a separate reader who actually goes through the code (or design document:
> Inspections work well for them as well) one piece at a time. In addition, it
> is absolutely forbidden for someone's manager to help inspect his product or
> to use the # of defects found by an inspection as part of performance rating.
>

I just don't see this as being realistic. What other job where a product
(software in this case) is produced do you find people not being judged on
the quality of their output? People absolutely have to be judged on how well
they perform their jobs and if putting out high quality software is their
job then their manager should be able to measure their job performance and
react accordingly.

I find the attitude expressed in the above two postings very disturbing. They
seem to imply that people involved in software production somehow need to be
treated very differently from people involved in other types of production.
You aren't allowed to measure their work, you can't judge their work, you
can't evaluate their job performance based on their work. This kind of
thinking is what is keeping software from becoming a true engineering
discipline and it is why so much software is so bad. Programmers need to
realize that their job is producing high quality software, not just a
program that "works". They need to be held accountable for their work.
I have found that true professionals don't mind code inspections and
walkthroughs, because they are confident in their ability and proud of
their work.

Unfortunately, this is a problem. Since the coding of software can take
many different forms, how do you judge "quality"? What one person
perceives as a higher "quality" of code may seem like a lower "quality"
of code to another. Right now, there is no one correct way to write a
program, unlike other engineering disciplines, which may have a single
answer (Does this bridge support X lbs of weight? A simplified example...).
How to define quality for software is still nebulous right now.
Do you base it on how well the program works? How efficiently it runs?
How well it is commented? All the above? Quality will mean different
things to different people, depending upon what their needs are.
That is why there is concern over rating programmers based on the
"quality" of their code.


Disclaimer: This is my opinion only. Tektronix may not share my opinion.

Jim Nusbaum, Computer Research Lab, Tektronix, Inc.
[ucbvax,decvax,allegra,uw-beaver,hplabs]!tektronix!crl!rjn
r...@crl.labs.tek.com
(503) 627-4612


Jon Kutemeier__________________________________________________________________
-----------------Software Engineer /XX\/XX\ phone:(708) 632-5433
Motorola Inc. Radio Telephone Systems Group ///\XX/\\\ fax: (708) 632-4430
1501 W. Shure Drive, Arlington Heights, IL 60004 uucp: !uunet!motcid!kutemj

Lord Bah

Jan 28, 1991, 6:52:37 PM
On the last project I worked on we held code inspections at each
implementation milestone. While they did get boring on occasion
we didn't find them particularly stressful. As they say, it's the
code that's being inspected, not the coder. I don't have exact
numbers, but the project had about two orders of magnitude fewer
problems reported during QA.

> Boring because they are a time consuming and non-creative activity --
> current issue of IEEE Software recommends 150 lines of code reviewed
> per man-day as a good figure.

There were between 5 and 7 of us over the course of development.
The inspections lasted about 4 hours and covered in the neighborhood
of 1000 lines of code each. We found it ABSOLUTELY ESSENTIAL
that each person participating receive a copy of the code a few
days before the inspection and go through it on their own before
the inspection, otherwise massive time gets wasted as people read
the code during the inspection and there are fewer useful contributions
because people don't have any understanding of the code. On
average, call it 1 day of work for each person, or about 166 lines
of code per man-day (not bad, IEEE!).
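A quick sketch of that arithmetic (Python; the six-participant midpoint of "between 5 and 7" is my assumption):

```python
# Checking the quoted rate: ~1000 lines per inspection, with each
# participant spending about one full day of effort (individual
# preparation plus the 4-hour meeting).
participants = 6                 # assumed midpoint of "between 5 and 7"
loc_per_inspection = 1000
person_days_of_effort = participants * 1   # one day each
rate = loc_per_inspection // person_days_of_effort
print(rate)                      # 166 lines of code per man-day
```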

> Not only that, but your average programmer was very likely attracted to
> programming in order to avoid social interaction and to create
> something under his/her personal control without anyone else watching.
> He/she is likely to be on the low end of the social tact scale and
> singularly unqualified to deal with this delicate situation.

An interesting insight. Not always true, of course, and often
countered by the drive to "show-off" in those who consider themselves
clever.

> In order to reduce these problems the following has been suggested:
> 1) The author not be present at the inspection

> 2) Only errors are communicated to the author. No criticism of style
> allowed.

BAH! I must disagree with both of these. The author must be present
to provide explanations when called for, and to note what the
inspection requires to be corrected. Style issues are also fair
game. Note that having a group coding standard helps immensely in
preventing religious wars during the code inspections (they basically
get moved to the time when you determine the coding standard).
You can then focus on proper functionality, maintainability,
adherence to standards, etc.

We found code inspections productive and useful without inducing
unnecessary stress.

--------------------------------------------------------------------
Jeff Van Epps amusing!lor...@bisco.kodak.com
lor...@cup.portal.com
sun!portal!cup.portal.com!lordbah

Richard Harter

Jan 29, 1991, 9:22:09 PM
In article <1991Jan28.2...@ico.isc.com>, r...@ico.isc.com (Dick Dunn) writes:
> p...@megatest.UUCP (Patrick Powers) writes:
> ...
> > Though there is plenty of evidence that code inspections are cost
> > effective, I believe they would tend to be boring and stressful.
> > Boring because they are a time consuming and non-creative activity --
> > current issue of IEEE Software recommends 150 lines of code reviewed
> > per man-day as a good figure...

> Well, we all know that lines of code is a lousy measure of anything except
> the number of newlines (don't we?:-), but still, if this measure is any-
> where close to real, it's a much stronger argument that Powers suggests
> against code inspections. A halfway-decent programmer can produce several
> times that 150 l/d figure...proceeding through anything at 20 lines/hour
> (that's 3 minutes per line, effectively???) is too slow to feel productive.

Reality check time. One can write several hundred lines of code in one
session. However that is exceptional. Typical industry figures are
5-10 thousand lines of delivered code per year which is 25-50 lines/day.
Programmers who can average 100 lines/day are quite exceptional.

Interestingly enough these figures don't seem to vary a great deal with
language. The rate is somewhat higher for assembly language and for
verbose languages such as COBOL, but not by enough to compensate for
their reduced expressiveness.
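The yearly and daily figures are consistent; a throwaway check (Python; the 200-working-day year is my assumed divisor, which Harter doesn't state):

```python
# Delivered lines per year over working days per year: 5-10 KLOC/year
# maps onto the quoted 25-50 lines/day with a ~200-day working year.
working_days_per_year = 200
for loc_per_year in (5000, 10000):
    print(loc_per_year // working_days_per_year)   # 25, then 50
```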

Since many of us can and have written several hundred lines of code at
one sitting, why is the average rate so low? One obvious reason is that
once you have written the code you have to compile and debug it. Another
is that a fair percentage of one's time gets eaten up in non-programming
activities. Still another is that there is always a certain amount of
low level design work that must be done while writing code. (Anywhere
from 20-70% of the design work is done at coding time, depending upon
the methodology used.) Still another factor is that quite a fair percentage
of the code that is written ends up not being deliverable.

Let's check. 150 lines is 3-5 procedures worth of structured modular
code, i.e. about one low-level module every two hours *on average*. In a
decent code review you have to verify that all external interfaces are
correctly referenced and used, that each line of code is correct, and
that the code makes sense. You also want to verify that the modular
decomposition is appropriate and that the modules fit into the over-all
design. Granted that some reviews will go quite quickly. However the
average will probably be closer to that 150 lines/day.
--
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb. This sentence short. This signature done.

leland.f.derbenwick

Jan 29, 1991, 5:57:47 PM
to
In article <1991Jan28.2...@ico.isc.com>, r...@ico.isc.com (Dick Dunn) writes:
> p...@megatest.UUCP (Patrick Powers) writes:
> ...
> > Though there is plenty of evidence that code inspections are cost
> > effective, I believe they would tend to be boring and stressful.
> > Boring because they are a time consuming and non-creative activity --
> > current issue of IEEE Software recommends 150 lines of code reviewed
> > per man-day as a good figure...
>
> Well, we all know that lines of code is a lousy measure of anything except
> the number of newlines (don't we?:-), but still, if this measure is any-
> where close to real, it's a much stronger argument than Powers suggests
> against code inspections. A halfway-decent programmer can produce several
> times that 150 l/d figure...proceeding through anything at 20 lines/hour
> (that's 3 minutes per line, effectively???) is too slow to feel productive.

The figure was 150 lines per person-day: effort, not time. Since a
typical "full" code inspection involves (1) the author, (2) the
moderator, (3) the reader, (4,5) a couple of other inspectors, that
comes to 150 lines in about 1.6 hours per person, average. That
seems quite reasonable, assuming 150 LOC inspected per hour, plus a
reasonable amount of preparation time for all participants.

(There have been some studies indicating that you can get by with
the author, a combined reader/moderator, and one other inspector,
or similar "reduced" inspections, without letting too many more
errors get by. Assuming the same effort per individual, that
increases the inspection productivity to about 250 LOC per person-
day.)
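The effort arithmetic here checks out; this small Python sketch reproduces it, assuming an 8-hour person-day (an assumption, since the post doesn't state the day length):

```python
# Inspection-effort arithmetic, assuming an 8-hour person-day.
HOURS_PER_PERSON_DAY = 8

def loc_per_person_day(loc, participants, hours_per_person):
    """LOC inspected per person-day of total effort spent."""
    person_days = participants * hours_per_person / HOURS_PER_PERSON_DAY
    return loc / person_days

# "Full" inspection: author, moderator, reader, two other inspectors,
# about 1.6 hours each (meeting plus preparation) for 150 LOC.
full = loc_per_person_day(150, participants=5, hours_per_person=1.6)

# "Reduced" inspection: three people, same per-person effort.
reduced = loc_per_person_day(150, participants=3, hours_per_person=1.6)

print(round(full), round(reduced))  # 150 and 250
```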

On a separate issue, most references indicate an average productivity
across an entire project (counting effort for documentation, etc.)
somewhere in the range of 5 to 20 LOC per person-day. Given that
coding is something like 1/6 of total effort, that still leaves
typical coding rates (assuming the modules are fully designed) in
the range of 30 to 120 LOC per person-day, much less than you seem
to assume. (Certainly there are bursts at much higher rates, and
a few people [Ken Thompson?] can probably sustain much higher rates.
But it isn't common.)
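The implied coding rate is simple back-of-the-envelope arithmetic on the figures above; the 1/6 coding share is the post's own assumption:

```python
# Whole-project productivity of 5-20 LOC/person-day, with coding
# taken as roughly 1/6 of total effort (the post's assumption).
CODING_FRACTION = 1 / 6

def implied_coding_rate(project_loc_per_day):
    """Pure-coding rate implied by a whole-project LOC/person-day figure."""
    return project_loc_per_day / CODING_FRACTION

print(round(implied_coding_rate(5)), round(implied_coding_rate(20)))  # 30 120
```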

-- Speaking strictly for myself,
-- Lee Derbenwick, AT&T Bell Laboratories, Warren, NJ
-- l...@cbnewsm.ATT.COM or <wherever>!att!cbnewsm!lfd

leland.f.derbenwick

Jan 29, 1991, 6:04:31 PM
to
In article <73...@tekchips.LABS.TEK.COM>, r...@crl.labs.tek.com writes:
> In article <40...@genrad.UUCP>, m...@genrad.com (Mark A. Swanson) writes:
> > In practice we have not found programmer's egos to be a major problem
> > to properly conducted Code Inspections. This, of course, assumes that
> > the Inspection process is actually following the defined cookbook approach,
> > complete with moderator who keeps the discussion on track and non personal
> > and a seperate reader who actually goes through the code (or design document:
> > Inspections work well for them as well) one piece at a time. In addition, it
> > is absolutely forbidden for someone's manager to help inspect his product or
> > to use the # of defects found by an inspection as part of performance rating.
> >
>
> I just don't see this as being realistic. What other job where a product
> (software in this case) is produced do you find people not being judged on
> the quality of their output? People absolutely have to be judged on how well
> they perform their jobs and if putting out high quality software is their
> job then their manager should be able to measure their job performance and
> react accordingly.

It depends what you see as the output. In my view, that's the code that
makes it past inspection, integration, and test, and goes out to the
customer. A higher-than-average rate of errors found in inspection might
mean that the programmer is bad, or it might mean that he/she writes
code that is so clearly readable that the inspection catches a greater
than average percentage of the errors. Similarly, a higher-than-average
rate of errors caught in integration and test might mean that the code
is bad, or the inspection was sloppy, or that the code was designed to
be thoroughly testable.

If you make errors caught at an early stage "evil", then people will do
everything they can to avoid catching them there: writing unclear code,
avoiding inspections, forcing inspections to be rushed, arguing over
what is and isn't a bug, etc., etc. And you might as well not bother
doing the inspections at all.

George W. Leach

Jan 30, 1991, 3:56:47 PM
to
In article <61...@stpstn.UUCP> c...@stpstn.UUCP (Brad Cox) writes:
>While not in any way arguing against your point, or about the utility of
>code inspections in general, isn't it about time that we started breaking
>our enfatuation with the *process* of building software (source code,
>style rules, programming language, lifecycle, methodology, software development
>process, CASE, etc, etc) and started concentrating on the *product*
>itself.

I think the reason we can't break away from this type of thinking is
that many organizations sorely need to concern themselves with improving
these processes. The current mode of operation is not sufficient.

>To me, the paradigm shift that we're facing is figuring out how to comprehend
>software products, which unlike manufactured things like firearm parts,
>are intangible...undetectable by the natural senses.

We need this as well, if for no other reason than to aid in the learning
phase for new programmers who will take on maintenance responsibilities.

>I envision tools to assist in understanding the static and dynamic properties
>of a piece of code the way physicists study the universe, not by asking
>how it was built (a process question), but by putting it under test to
>determine what it does.

One thing that has always bothered me about topics like this is the
fact that software is not a natural phenomenon, but a man(person)-made
commodity. As such we should strive for understanding what we are building
(or growing, if you believe Harlan Mills) before we do so. Imagine if a
Civil Engineer built a bridge and then tried to understand its structure
or what it does :-) Certainly software is very different from any other
type of engineering or product development activity, but we need to have
a way to model the behavior of the system before we build it a la simulation
or some other technique to gain insight into the operations before we
commit it to code.

--
George W. Leach AT&T Paradyne
reg...@paradyne.com Mail stop LG-133
Phone: 1-813-530-2376 P.O. Box 2826
FAX: 1-813-530-8224 Largo, FL 34649-2826 USA

Alvin the Chipmunk Sylvain

Jan 29, 1991, 8:51:15 PM
to
In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:
> It seems that the problem with code inspections is largely emotional.
> Though there is plenty of evidence that code inspections are cost
> effective, I believe they would tend to be boring and stressful.
> Boring because they are a time consuming and non-creative activity --
> current issue of IEEE Software recommends 150 lines of code reviewed
> per man-day as a good figure. I know I would not want to do this, and
> who would? Stressful because it is out of the programmer's control,
> and because criticism is involved. People identify closely with their
> creations and find criticism painful.

Any kind of criticism must be tempered by the fact that, regardless of
how "lone wolfish" the programmer may be, s/he is ultimately part of a
team. It is the team which reviews the work, and the team which finds
problems.

Personality conflicts are a management problem, including letting the
people know that all criticism _shall be_ viewed constructively.

[...]


> In order to reduce these problems the following has been suggested:
> 1) The author not be present at the inspection

No, the author must be there. If s/he can't handle criticism, espe-
cially in a team context such as this, s/he needs to grow up some.
Again, this is part of management's job, to make sure that everyone
knows:
(a) Everybody makes mistakes, including You,
(b) We are earnestly looking for All mistakes, including Yours, so we
can remove them, and
(c) When we find Your mistakes, it doesn't mean You're Stupid or that
We Don't Love You. It just means that you fall into category (a)
like the rest of us, and, hallelujah, we succeeded in category (b).

> 2) Only errors are communicated to the author. No criticism of style allowed.

I agree with this to a degree. If the style is making understanding
difficult, that is a valid point to bring up with the author. Badly
spaghettied code, for example, must be avoided unless absolutely neces-
sary. Inconsistent style can lead to errors in understanding, and
again, must be avoided.

> I've toyed with the idea of instituting code inspections but just
> couldn't bear to be the instrument of a good deal of unhappiness.

[...]

I assume then that you are in management. Therefore, it is up to you to
mitigate this unhappiness. Believe me, it *can* be done. I suspect
that even the loneliest programmer appreciates an excuse to crawl out
from under the terminal, so long as s/he feels there is a valid reason
to do so. Finding and removing errors is a valid reason. Perfecting
the team product is a valid reason.

Just let them know that the criticism _shall be_ constructive, and that
it _shall be_ rendered in a professional manner. No "nyahh-nyahh's"
allowed!
--
asyl...@felix.UUCP (Alvin "the Chipmunk" Sylvain)
=========== Opinions are Mine, Typos belong to /usr/ucb/vi ===========
"We're sorry, but the reality you have dialed is no longer in service.
Please check the value of pi, or see your SysOp for assistance."
=============== Factual Errors belong to /usr/local/rn ===============
UUCP: uunet!{hplabs,fiuggi,dhw68k,pyramid}!felix!asylvain
ARPA: {same choices}!felix!asyl...@uunet.uu.net

Pierre P. Blais

Jan 31, 1991, 10:15:59 AM
to
I have had some experience with code inspections which I would
like to relate to you.

From my experience, I think that code inspections should be started
after a certain amount of (sanity) testing is done. This prevents
wasting the inspector's time in uncovering (obvious) defects. This
is the same argument as using the compiler to find syntax errors
instead of having a human spend time reading the code looking for
them.

Now, how do you decide when enough testing has been done and code
inspection should start? From empirical data, one can determine
how many defects are detected per person-hour of testing and code
inspection. When fewer defects are detected in one hour of testing
than would be found in an hour of inspection, it is time to start
inspecting.
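That crossover rule amounts to a one-line comparison; here is an illustrative Python sketch of it, where the defect rates are made-up numbers, not data from the post:

```python
# The crossover rule: keep testing while it finds defects faster than
# inspection would; start inspecting once it doesn't.
# The sample rates below are invented, purely for illustration.

def should_start_inspecting(test_defects_per_hour, inspection_defects_per_hour):
    """True once testing yields fewer defects per person-hour than
    inspection historically does on this project."""
    return test_defects_per_hour < inspection_defects_per_hour

print(should_start_inspecting(0.4, 1.1))  # True: time to inspect
print(should_start_inspecting(2.5, 1.1))  # False: keep testing
```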

Code inspections have the advantage that they spread the knowledge
about the software to people other than the author. Selection of
inspectors should be done on that basis. In addition, code comments
are reviewed for usability; there is no way to test for that by
running the code.

The main drawback to inspections is the drain on human resources.
If a code inspection team consists of four people including the
author, the author usually ends up "owing" three people. In this
case, developers usually spend three times more time inspecting
other people's code than their own.

Also, some projects may be so large that a pace of 150 lines of
code per hour makes it impossible to inspect all code within a
reasonable period of time (taking into account no more than one
three hour inspection session per day to combat fatigue and bore-
dom).
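The calendar-time implication is easy to work out; in this Python sketch the 200,000-LOC project size is an invented example:

```python
# Calendar time to inspect a code base at 150 LOC/hour with at most
# one three-hour inspection session per team per day.
# The 200,000-LOC project size is an invented example.

def inspection_days(total_loc, loc_per_hour=150, hours_per_day=3, teams=1):
    """Days needed to inspect total_loc under the stated pacing limits."""
    return total_loc / (loc_per_hour * hours_per_day * teams)

print(round(inspection_days(200_000)))          # ~444 days with one team
print(round(inspection_days(200_000, teams=4))) # ~111 days with four teams
```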

All in all, when used judiciously, at the right time, and when they
are planned properly, code inspections are well worth their time.

--
Pierre P. Blais Bell-Northern Research
-----------------------------------------------------------------------
BITNET: ppb...@bnr.ca VOICE: (613) 763-4270
UUCP: uunet!bnrgate!bcars305!ppblais FAX: (613) 763-2626
LAND: P.O. Box 3511, Station C, Ottawa, Canada, K1Y 4H7
-----------------------------------------------------------------------
"Design defect fixes; don't just throw code at them."

Frank Wales

Jan 30, 1991, 4:30:35 PM
to
Here are some datapoints and opinions based on personal experience.

In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:

>Boring because they are a time consuming and non-creative activity --

This implies that criticism has no part to play in creation, which I
do not believe.

>current issue of IEEE Software recommends 150 lines of code reviewed
>per man-day as a good figure.

The last project where we reviewed the code has a figure of about 10 lines
of code reviewed per minute (based on reviewing 8,500 lines of
product, which was done by the three authors in three working days).
Whoever reviews at a rate of one line per three minutes had better have
some pretty long lines of code.

>People identify closely with their creations and find criticism painful.

Criticism is in part an educational process; programmers who don't want
to learn what they do wrong, or what other people think of their work,
aren't the kind I'd depend on to produce quality products. If the people
involved in the project give a damn about doing their best, they will
quickly come to enjoy the code review process as a learning experience;
maybe it will also deepen the respect they have for each other's work,
which is a valuable team-building tactic that good managers can exploit.

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.

>[other vaguely insulting generalisations deleted]

It seems to me that maybe you're working with too many low-grade
code-grinders. Hire some actual software professionals in their place.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection

Bad idea. Very bad. To apply a courtroom metaphor, it would be like
denying the accused the ability to be present, and give his own version
of the events. At the code reviews I run, the author of the code is
the reader of the code. It is the responsibility of the others present
to convince the author that code is poor, where this is appropriate.
The principal software engineer is responsible for deciding issues of
style, and the project manager has final say on what goes in the actual
product. Interruptions are not allowed to enter the code review room,
while disagreements are not allowed to leave it; they must be resolved
before people go back to writing software.

>2) Only errors are communicated to the author. No criticism of style allowed.

Also a bad idea. If I can't understand something, I don't care if it works,
because my confidence in it is reduced. Style may not matter at run-time,
but it certainly matters at read-time and think-time. If you go under a bus,
I don't want to have to hire a medium to figure out how to fix your code.

Other random comments: I use scheduled code reviews as a place to
resolve implementation details which were decided upon on the fly by
a programmer when the design documents or other colleagues could not
give an authoritative answer when the code was written. In this regard,
they are a valuable way of helping to analyse the many small-but-important
decisions that get taken during software construction. A second
important use is to allow each programmer to become familiar with the
work of his colleagues, which is a combined educational and confidence-
building exercise. And a third use is to allow programmers to teach each
other tricks and techniques that have never been explicitly communicated or
written down anywhere else but in the code itself. Hold the whole review
off-site if you can, in a quiet room with plenty of paper, pencils,
a flipchart and {black|white}board, coffee, Pepsi and lots of donuts.

I believe code review is a valuable tool, and avoiding it for what amounts
to egotistical reasons serves neither the developers nor the customer.
[FYI: I usually review my own code during a project, even if I am
the sole author -- just like Oscar Wilde, I enjoy a good read :-).]
--
Frank Wales, Grep Limited, [fr...@grep.co.uk<->uunet!grep!frank]
Kirkfields Business Centre, Kirk Lane, LEEDS, UK, LS19 7LX. (+44) 532 500303

Brian Marick

Feb 1, 1991, 10:19:45 AM
to
ppb...@bcars305.bnr.ca (Pierre P. Blais) writes:

>From my experience, I think that code inspections should be started
>after a certain amount of (sanity) testing is done. This prevents
>wasting the inspector's time in uncovering (obvious) defects. This
>is the same argument as using the compiler to find syntax errors
>instead of having a human spend time reading the code looking for
>them.

I have a similar approach. I find that inspections and testing are
good at discovering different kinds of faults. For example, dynamic
testing is poor at discovering what Dewayne Perry calls "obligation
faults", cases where, for example, heap-allocated memory is not freed
or open files are not closed. But dynamic testing (when backed up by
tools to measure test suite coverage) is effective at discovering
off-by-one errors or wrong-variable-used errors; consequently, it's a
waste to check for these during code reads. (It's a waste because, I
believe, the long-term cost of detecting a fault with a code read is
higher than with dynamic testing. The reason is maintenance: that
dynamic test, if written sensibly, can be rerun quite cheaply.
"Rerunning" a code read when you change a module is roughly the same
as the cost of the original code read.)
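An "obligation fault" of the kind described above is easy to show concretely. This is an invented Python example (the function names are mine, not from the post): both versions return the right line count, so a black-box test on the return value passes either way, but the buggy one silently drops its obligation on one path, which a code read catches:

```python
# An invented illustration of an "obligation fault": both functions
# return the correct line count, so a functional test passes either
# way, but the buggy version never closes the file on its early-return
# path. Only a code read (or a resource-leak checker) catches it.

def count_lines_buggy(path):
    f = open(path)
    lines = f.readlines()
    if not lines:
        return 0          # obligation fault: this path skips f.close()
    f.close()
    return len(lines)

def count_lines_fixed(path):
    with open(path) as f:  # the context manager discharges the obligation
        return len(f.readlines())
```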

What I do nowadays is keep a catalog of explicit questions to ask when
reading the code. I test a module by writing black box tests (pretty
much the standard technique, except that I maintain a catalog of
special test cases for data structures, operations, and combining
rules that recur in specifications), then I look inside the code to
write more black-box-style tests based on the cliches I find there.
(The idea being that people implementing cliched code tend to make
cliched mistakes.) At this point, while I'm reading the code anyway,
I apply the inspection checklist to look for those kinds of faults I
expect the other tests won't catch. Then I run the tests and add new
ones until I've satisfied branch, loop, multi-condition, and weak
mutation coverage.

This seems to work pretty well and hasn't been as time-consuming as
I'd earlier expected.

Brian Marick
Motorola @ University of Illinois
mar...@cs.uiuc.edu, uiucdcs!marick

Alan R. Weiss

Jan 31, 1991, 8:33:26 PM
to al...@tivoli.com
In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.

Actually, my belief is that it is *also* temporal: developers believe
that it takes up a lot of time. When proven otherwise, they are more
receptive. At that point, *some* people have reservations based upon
BAD inspections, or poorly-trained inspectors and moderators, or
simply the THOUGHT of inspections. Also hearsay plays a part.

>Though there is plenty of evidence that code inspections are cost
>effective, I believe they would tend to be boring and stressful.

It's a funny thing about boredom. In a small start-up like ours
(Tivoli Systems), everyone has a stake in the success of our firm.
If something is proven cost-effective, our developers are absolutely
behind it 110%. If it is a time-waster, they chafe. The challenge
is two-fold: first, to PROVE the merits of inspections, and that can
be done in a number of ways: case histories, measuring your own
development/quality statistics, cost analysis, faith, etc. :-)

The second challenge is more fundamental: how do you get developers
to view themselves as software engineers (i.e. professionals)?
How do you get developers in BIG corporations with a diffused sense of
ownership and responsibility (and reward) to get excited about cost savings?
And THAT is a management challenge. Good management is constantly selling
ideas and motivating their staff, challenging them to link the development
plan with the business plan in their own minds, and thence to THEIR plans.

>Boring because they are a time consuming and non-creative activity --
>current issue of IEEE Software recommends 150 lines of code reviewed
>per man-day as a good figure. I know I would not want to do this, and
>who would?

So, I can assume that you are advocating that no one actually READS
the source code? No? Then why not actually track issues and actions?
Inspections, done well, SAVE time. Guaranteed! Besides, the time
saved is the back-end development time (you know, the ol' release/
test the bugs/return to development/fix the bugs/ad nauseam cycle.
Developers get REAL bored with fixing bugs all the time, right?)

At Tivoli, I am VERY fortunate to have a group of developers and
managers who believe in inspections (a QA Manager's dream!). We
started this process, and guess what? The developers are totally
convinced that it will save them LOTS of time later. We're finding
bugs in all kinds of deliverables (specifications, manuals, source, etc).

I promise to keep this newsgroup apprised of the process (special
thanks goes out to Kerry Kimbrough, a very brave Development Manager
indeed who started this process at Tivoli).

>Stressful because it is out of the programmer's control,
>and because criticism is involved. People identify closely with their
>creations and find criticism painful.

Bzzt! Wrong. In Fagan Inspections (as modified by Tom Gilb),
ONLY the developer gets to prioritize and rank the incoming defects.
The actual inspections occur off-line individually, and the
Defect Logging Meeting is simply a fast recording of defects. Afterward,
the developer MUST respond to every item, but can in fact choose his/her
response based upon engineering principles. QA's function is to serve
in a consulting capacity, continually working with the community to
correlate requirements with design with implementation.

Besides, haven't you read Gerald Weinberg's "The Psychology of
Computer Programming"? Ever hear of ego-less programming? All
programming is iterative!!! It's just a question of solving problems
earlier in the product's life, or later (i.e. in support).

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.

Maybe. But I don't *think* so. Programming is
an intensely SOCIAL activity. In anything other than a one-person
shop, programmers MUST interact with each other. Sure, the culture
is different from the interactions between, say, hairdressers in a salon
(hello, Laurelyn!). But there is ALWAYS a prevailing culture, and
that implies society. Again, Weinberg is the guru in this.

>He/she is likely to be on the low end of the social tact scale and
>singularly unqualified to deal with this delicate situation.

Lucky thing I'm reading this before my staff sees this! They would
take exception to this, and so do I. Programmers may be different,
but the stereotype of a hacker-nerd is insulting and gross.

>Again, this may very well have attracted them to programming: it doesn't
>matter whether anyone likes their personality, all that counts is
>whether the program works.

Maybe in school. They don't teach software engineering, or how to actually
run a business based around software in most CS programs. Programmers
find out FAST how the real world works when someone actually pays them
(large sums of) money. They find out that "working" is trivial;
it's optimization, schedule, cost, maintainability, and a number of other
factors that count. Including teamwork, dude.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection
>2) Only errors are communicated to the author. No criticism of style allowed.

You need to study Fagan. Also, Kerry (and some of the net.people :-)
turned me onto Tom Gilb: absolutely fabulous stuff. Your ideas
are too primitive. You really need to study this subject first.
Lemme know if you want help!

>I've toyed with the idea of instituting code inspections but just
>couldn't bear to be the instrument of a good deal of unhappiness.

I assume that you don't want to tell them that they are laid-off,
either, due to lack of sales, right?

Is a friend someone who tells you nice-sounding lies, or is a friend
someone who tells you the truth?

Whatever happened to courage? You get courage by being sure of your
facts, by researching matters, and by measuring success and then
selling the hell out of it. Test the waters!

>It seems to me that it could work with programmers directly out of college
>who feel in need of guidance. It also might succeed in a large
>paternalistic organization as these would be more likely to attract
>group oriented engineers. Note that the classic studies of code
>inspection occurred at mammoth IBM.

Yet often even IBM does not do inspections (I speak from personal
experience). Yes, they have studied inspections and methodology for
over 20 years (so has TRW), but then again they have the money to
do pure research (see also Software Engineering Institute at
CMU and Purdue's program). Still, inspections work regardless
of organization size.


>In spite of all this, I think code inspections would be accepted in any
>application where there is a clear need such as the space shuttle
>program where reliability is crucial and interfaces are complex.

Another funny thing, Patrick: why do people think that life-threatening
applications are more important than business-threatening applications?
To a small business owner, software bugs can literally KILL his/her
business. To THEM, there is a clear need, right? They would rather die
than see their "baby" croak.

>... On the other hand in routine applications with a good
>deal of boiler plate code they could be a "real drag", exacerbating the
>humdrum nature of the task.

Maybe. But "boiler-plating" should be treated as such. However,
I've seen cases where people *think* it's a template, but it's not.
Expensive, evil cases. Costly cases. And THAT is a REAL drag :-)

.-------------------------------------------------------.
| Alan R. Weiss |
| Manager, QA and Mfg. _______________________________|
| Tivoli Systems, Inc. | These thoughts are yours for |
| Austin, Texas, US | the taking, being generated |
| 512-794-9070 | by a program that has failed |
| al...@tivoli.com | the Turing Test. *value!=null;|
|_______________________________________________________|
|#include "std.disclaimer" --- Your mileage may vary! |
.-------------------------------------------------------.

Der Grouch

Feb 1, 1991, 12:17:32 PM
to
In article <61...@stpstn.UUCP> c...@stpstn.UUCP (Brad Cox) writes:
> While not in any way arguing against your point, or about the utility of
> code inspections in general, isn't it about time that we started breaking
> our enfatuation with the *process* of building software (source code,
> style rules, programming language, lifecycle, methodology, software development
> process, CASE, etc, etc) and started concentrating on the *product*
> itself.

Well, I'm not sure I totally understand your statement. To me, every
product is the end result of a process. In particular, if we are going to
use tools of some sort, we are going to thereby impose a process on the tool
users. For example, you say...

> I envision tools to assist in understanding the static and dynamic
> properties of a piece of code the way physicists study the universe, not
> by asking how it was built (a process question), but by putting it under
> test to determine what it does.

This paragraph, to me, seems to be advocating one kind of process over
another. I'm not saying that you're right or wrong in advocating the "test
and see what it does" process, but I am saying that it's a process like any
other.

Have I missed your point entirely?

--
--Alan Wexelblat phone: (508)294-7485
Bull Worldwide Information Systems internet: w...@pws.bull.com
"Honesty pays, but it doesn't seem to pay enough to suit some people."

Cameron Laird

Feb 1, 1991, 6:22:01 PM
to
In article <1991Feb1.2...@spool.cs.wisc.edu> dpa...@shorty.cs.wisc.edu (David Parter) writes:
.
.
.
> 6. If the reviewers do not understand the code, or how it fits
> into something else, DO NOT RELY ON THE AUTHOR to clarify.
>
> I was the author for a sets of changes to some existing code.
> The review focused on the parts I changed, not on the program
> as a whole. I was called upon to give an overview of how it all
> fit together, and what I had changed. My overview was accepted,
> the code was approved, and a few days later I found numerous
> errors that should have been found during the code review (or
> design review, which is sometimes difficult for modifications
> to existing code) -- and weren't, because they believed my
> explanation of how things worked, which turned out to be wrong.
> Since no members of the review team had an understanding of the
> "big picture," at least one of them should have been charged
> with doing a more detailed review of the "big picture" in order
> to provide the assurances that various assumptions and/or
> assertions were valid.
.
.
.
This is an IMPORTANT rule, and one which applies far, far beyond
inspections. One of the best things we can do for each other is
to cultivate the attitude that authors don't get to explain them-
selves (well, they do, but only on special occasions). Authors
*must* push themselves to write so that they can be understood;
if readers/reviewers/inspectors don't understand, that is (in gen-
eral) a sign that the author needs to rewrite (most often:
comment more clearly) what he or she has written. One of the
nice things about this rule is that it's easy to teach: each time
an author starts, "Well, what I'm trying to do there is ...", his
or her colleagues need immediately to remind, "Then *say* ..., IN
THE SOURCE."
--

Cameron Laird USA 713-579-4613
c...@lgc.com USA 713-996-8546

David Parter

Feb 1, 1991, 4:47:50 PM
to
At my previous place of employment, some type of code review/read/inspection
was accepted as normal. Unfortunately, there was no clear agreement on
what exactly this event was all about (thus the multiple choice name),
so some code reviews were more detailed than others.

Here, however, I offer some observations about the process. Of course,
this is all anecdotal, and your mileage will vary...

1. The more comfortable the author (and the reviewers) are with
the others involved in the review, and with the review process,
the more productive the review will be, and the more likely to
avoid ego battles. Of course, if everyone is everyone's best
friend and therefore refrains from offering valid criticism,
then it is a waste of everyone's time.

2. The more experience all the participants have with code
reviews, the more effective they are. Once everyone knows what
to expect ("yes, they will criticize my comments, and if my
comments aren't telling them what the code does, then I guess
they are right, my comments aren't good enough"), and has had a
few chances to be both reviewer and author, they are
comfortable with offering and taking criticism.

3. MAKE SURE TO STAY WITHIN THE AGREED PURPOSE OF THE REVIEW. The
worst review I have ever heard of (I was not there, I was down
the hall and heard a lot of it) was the following situation:

The code author was a new hire (w/in 6 months), just out of
college. It was the first time his code had been subject to
review. His manager was not at the review (he was not in town,
in fact). One of the reviewers was a senior person who had
been involved in the initial project plan (of which this was a
small, but independent part), but not at any of the subsequent
reviews related to this part (functional specification,
design).

The senior reviewer proceeded to turn the code review into a
design review, ripping the code and design to shreds, and
producing a new design at the meeting. No one stopped him, and
no one defended the programmer. Needless to say, the programmer
was very upset. When his manager returned, and heard what had
happened, he was not pleased either -- but he knew that it
wasn't the programmer's fault, and told him so.

4. Good code reviews with "tough" reviewers often lead to better
code because the author makes an effort to prepare for the
review. He or she may read through the code, anticipating the
comments of the reviewers, and improving things beforehand.
What would have been "good enough" in the past gets improved by
the best person to do the improving -- the original coder.

5. One poster mentioned that the novice programmer presenting his
or her code for review the first time may be very upset by the
review. One way to soften the blow is to have the novice
participate as an extra reviewer (it was also mentioned that he
or she may be wary of criticizing senior people, so "extra"
because they may not be that productive) at several reviews of
other parts of the project, with (some or all of) the same
reviewers who will be reviewing his or her code, so that he or
she will be used to their personal style and will know what to
expect.

6. If the reviewers do not understand the code, or how it fits
into something else, DO NOT RELY ON THE AUTHOR to clarify.

I was the author of a set of changes to some existing code.
The review focused on the parts I changed, not on the program
as a whole. I was called upon to give an overview of how it all
fit together, and what I had changed. My overview was accepted,
the code was approved, and a few days later I found numerous
errors that should have been found during the code review (or
design review, which is sometimes difficult for modifications
to existing code) -- and weren't, because they believed my
explanation of how things worked, which turned out to be wrong.
Since no members of the review team had an understanding of the
"big picture," at least one of them should have been charged
with doing a more detailed review of the "big picture" in order
to provide the assurance that various assumptions and/or
assertions were valid.

Good luck in your reviewing,

--david
--
david parter dpa...@cs.wisc.edu

Bob Martin

Jan 29, 1991, 12:55:53 PM
In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.
[... stuff removed about how inspections would be boring and
painful since they are critical and non creative...]

Inspections are a way for many engineers to learn about the creation of
another. They are also a way for engineers to ensure that their creations
are complete and error free. In my experience, inspections are not
nearly as painful as shipping bugs to customers. In fact I find inspections
to be quite painless. When bugs are found, everyone (including the author)
breathes a sigh of relief that the problem was caught early.

>
>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.

This is a generalization which borders on bigotry. Not all software engineers
are socially inept. I certainly wouldn't want to work for you if
I knew that this was your view.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection
>2) Only errors are communicated to the author. No criticism of style allowed.

I think the author _must_ be present so that s/he can explain and defend
the work. Criticism should be restricted to _real_ errors and violations
of written standards and procedures.
>

>I've toyed with the idea of instituting code inspections but just
>couldn't bear to be the instrument of a good deal of unhappiness.

Inspections are instruments of happiness. Customers are happier, managers
are happier, engineers are happier. Inspections are investments in the
long term health of the product. This is something that almost any
engineer can identify with.

>It
>seems to me that it could work with programmers directly out of college
>who feel in need of guidance. It also might succeed in a large
>paternalistic organization as these would be more likely to attract
>group oriented engineers.

Another unwarranted generalization. Inspections work with anyone who
truly cares about the project/product they are working on.


>
>In spite of all this, I think code inspections would be accepted in any
>application where there is a clear need such as the space shuttle
>program where reliability is crucial and interfaces are complex.

Are you saying you do not have a clear need to produce high-quality
software? Given that the cost of fixing errors is multiplied by
many orders of magnitude if the errors get to the field, don't you
have a clear need to protect your organization's investment by making
sure bugs are fixed as early as possible?

IMHO inspections should be performed by all software organizations, big
to small, working on any kind of project, critical to recreational. There
is no excuse for not checking your work.

--
+-Robert C. Martin-----+:RRR:::CCC:M:::::M:| Nobody is responsible for |
| rma...@clear.com |:R::R:C::::M:M:M:M:| my words but me. I want |
| uunet!clrcom!rmartin |:RRR::C::::M::M::M:| all the credit, and all |
+----------------------+:R::R::CCC:M:::::M:| the blame. So there. |

Alan R. Weiss

Feb 2, 1991, 11:33:14 AM
to al...@tivoli.com
In article <14...@megatest.UUCP> p...@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.

Actually, my belief is that it is *also* temporal: developers believe
that it takes up a lot of time. When proven otherwise, they are more
receptive. At that point, *some* people have reservations based upon
BAD inspections, or poorly-trained inspectors and moderators, or
simply the THOUGHT of inspections. Hearsay also plays a part.

>Though there is plenty of evidence that code inspections are cost
>effective, I believe they would tend to be boring and stressful.

It's a funny thing about boredom. In a small start-up like ours
(Tivoli Systems), everyone has a stake in the success of our firm.
If something is proven cost-effective, our developers are absolutely
behind it 110%. If it is a time-waster, they chafe. The challenge
is two-fold: first, to PROVE the merits of inspections, and that can
be done in a number of ways: case histories, measuring your own
development/quality statistics, cost analysis, faith, etc. :-)

The second challenge is more fundamental: how do you get developers
to view themselves as software engineers (i.e. professionals)?
How do you get developers in BIG corporations with a diffused sense of
ownership and responsibility (and reward) to get excited about cost savings?
And THAT is a management challenge. Good management is constantly selling
ideas and motivating their staff, challenging them to link the development
plan with the business plan in their own minds, and thence to THEIR plans.

>Boring because they are a time consuming and non-creative activity --
>current issue of IEEE Software recommends 150 lines of code reviewed
>per man-day as a good figure. I know I would not want to do this, and
>who would?

So, I can assume that you are advocating that no one actually READS
the source code? No? Then why not actually track issues and actions?
Inspections, done well, SAVE time. Guaranteed! Besides, the time
saved is the back-end development time (you know, the ol' release/
test the bugs/return to development/fix the bugs/ad nauseam cycle.
Developers get REAL bored with fixing bugs all the time, right?)
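To put the quoted 150-lines-per-man-day figure in concrete terms, here is a back-of-the-envelope pacing sketch (the module size and team size below are made-up illustrative numbers, not anything from the thread):

```python
def inspection_effort(total_loc, loc_per_man_day=150, team_size=4):
    """Engineer-days consumed if every team member reviews all the code
    at the quoted IEEE Software pace. Purely illustrative arithmetic."""
    man_days_each = total_loc / loc_per_man_day
    return man_days_each, man_days_each * team_size

each, total = inspection_effort(3000)
print(each, total)  # 20.0 days per reviewer, 80.0 engineer-days in total
```

Numbers like these are why Mark Swanson's earlier post calls inspections "a major time block on the project pert chart" when they weren't in the schedule to begin with.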

At Tivoli, I am VERY fortunate to have a group of developers and
managers who believe in inspections (a QA Manager's dream!). We
started this process, and guess what? The developers are totally
convinced that it will save them LOTS of time later. We're finding
bugs in all kinds of deliverables (specifications, manuals, source, etc).

I promise to keep this newsgroup apprised of the process (special
thanks goes out to Kerry Kimbrough, a very brave Development Manager
indeed who started this process at Tivoli).

>Stressful because it is out of the programmer's control,
>and because criticism is involved. People identify closely with their
>creations and find criticism painful.

Bzzt! Wrong. In Fagan Inspections (as modified by Tom Gilb),
ONLY the developer gets to prioritize and rank the incoming defects.
The actual inspections occur off-line individually, and the
Defect Logging Meeting is simply a fast recording of defects. Afterward,
the developer MUST respond to every item, but can in fact choose his/her
response based upon engineering principles. QA's function is to serve
in a consulting capacity, continually working with the community to
correlate requirements with design with implementation.
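The flow described above — fast defect logging, then a mandatory response from the author for every item — can be sketched as follows. Every class name and field here is a hypothetical illustration of mine, not part of any published Fagan/Gilb toolkit:

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    location: str            # e.g. "parser.c:142"
    description: str
    severity: str = ""       # the AUTHOR ranks it: "major" / "minor" / "no-change"
    response: str = ""       # the author's required response

@dataclass
class DefectLog:
    defects: list = field(default_factory=list)

    def log(self, location, description):
        # Defect Logging Meeting: record fast, no discussion of fixes.
        self.defects.append(Defect(location, description))

    def unanswered(self):
        # The log is not closed until the author has responded to every item.
        return [d for d in self.defects if not d.response]

log = DefectLog()
log.log("parser.c:142", "null pointer not checked before dereference")
log.log("parser.c:200", "comment disagrees with loop bound")
log.defects[0].severity = "major"
log.defects[0].response = "fixed in rev 1.7"
print(len(log.unanswered()))  # one item still awaiting the author's response
```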

Besides, haven't you read Gerald Weinberg's "The Psychology of
Computer Programming"? Ever hear of ego-less programming? All
programming is iterative!!! It's just a question of solving problems
earlier in the product's life, or later (i.e. in support).

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.

Maybe. But I don't *think* so. Programming is
an intensely SOCIAL activity. In anything other than a one-person
shop, programmers MUST interact with each other. Sure, the culture
is different from the interactions between, say, hairdressers in a salon
(hello, Laurelyn!). But there is ALWAYS a prevailing culture, and
that implies society. Again, Weinberg is the guru in this.

>He/she is likely to be on the low end of the social tact scale and
>singularly unqualified to deal with this delicate situation.

Lucky thing I'm reading this before my staff sees this! They would
take exception to this, and so do I. Programmers may be different,
but the stereotype of a hacker-nerd is insulting and gross.

>Again, this may very well have attracted them to programming: it doesn't
>matter whether anyone likes their personality, all that counts is
>whether the program works.

Maybe in school. They don't teach software engineering, or how to actually
run a business based around software, in most CS programs. Programmers
find out FAST how the real world works when someone actually pays them
(large sums of) money. They find out that "working" is trivial;
it's optimization, schedule, cost, maintainability, and a number of other
factors that count. Including teamwork, dude.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection
>2) Only errors are communicated to the author. No criticism of style allowed.

You need to study Fagan. Also, Kerry (and some of the net.people :-)
turned me onto Tom Gilb: absolutely fabulous stuff. Your ideas
are too primitive. You really need to study this subject first.
Lemme know if you want help!

>I've toyed with the idea of instituting code inspections but just
>couldn't bear to be the instrument of a good deal of unhappiness.

I assume that you don't want to tell them that they are laid off,
either, due to lack of sales, right?

Is a friend someone who tells you nice-sounding lies, or is a friend
someone who tells you the truth?

Whatever happened to courage? You get courage by being sure of your
facts, by researching matters, and by measuring success and then
selling the hell out of it. Test the waters!

>It seems to me that it could work with programmers directly out of college
>who feel in need of guidance. It also might succeed in a large
>paternalistic organization as these would be more likely to attract
>group oriented engineers. Note that the classic studies of code
>inspection occurred at mammoth IBM.

Yet often even IBM does not do inspections (I speak from personal
experience). Yes, they have studied inspections and methodology for
over 20 years (so has TRW), but then again they have the money to
do pure research (see also Software Engineering Institute at
CMU and Purdue's program). Still, inspections work regardless
of organization size.

>In spite of all this, I think code inspections would be accepted in any
>application where there is a clear need such as the space shuttle
>program where reliability is crucial and interfaces are complex.

Another funny thing, Patrick: why do people think that life-threatening
applications are more important than business-threatening applications?
To a small business owner, software bugs can literally KILL his/her
business. To THEM, there is a clear need, right? They would rather die
than see their "baby" croak.

>... On the other hand in routine applications with a good
>deal of boiler plate code they could be a "real drag", exacerbating the
>humdrum nature of the task.

Maybe. But "boiler-plating" should be treated as such. However,

Alan R. Weiss

Feb 2, 1991, 11:55:51 AM
In article <29...@mimsy.umd.edu> c...@tove.cs.umd.edu (Christopher Lott) writes:
>In article <40...@genrad.UUCP>, m...@genrad.com (Mark A. Swanson) writes:
>>> In addition, it
>>> is absolutely forbidden for someone's manager to help inspect his product or
>>> to use the # of defects found by an inspection as part of performance rating.
>>>
>
><73...@tekchips.LABS.TEK.COM> r...@crl.labs.tek.com (Jim Nusbaum) replies:
>>
>>I just don't see this as being realistic. What other job where a product
>>(software in this case) is produced do you find people not being judged on
>>the quality of their output?
>
>I think Mr. Nusbaum misses the point slightly. A manager or group
>leader who introduces code inspections into a s/w environment must
>assure the group that "number of faults found during inspections"
>will not suddenly dominate performance appraisals.

As a manager, I find one-dimensional performance appraisals to be
a sign of poor management :-) In general, I'm not a believer in
any correlation between the quantity of defects found during inspections
and total aggregate worth of an individual to a firm, for the following
reasons:

1. Good inspectors find lots of bugs.
2. Good inspectors don't find lots of bugs in "good" code.
3. Not all code is created equal.
4. Not all problems have the same degree of difficulty
in achieving design.
5. Not all schedules are created equal. ;-) Most are
in fact created out of vapor :-(
6. Bad inspectors don't find bugs, but may be good programmers.

etc.

Rather, I would prefer to reserve judgement until Build/Configuration
Management: if your code builds, you get to pass. If your code is
clean during Functional and System Test, you win. If my Find Rate
is really low during testing, you get a bonus. If YOU find errors
in your own code during testing, you get to stay around nights and
fix it ...


>Of course people must be judged on the quality of their work - but
>define software quality for me ;-)

Must we? C'mon, let's be *real* here, Chris. Conformance to standards.
Less than target Find Rates. The cost of doing it over again. Etc.
We *could* go on and get religious, or we *could* accept the
fact that quality CAN be measured, even in software.

>Number of faults detected in inspections is important, but beware
>of attaching too much importance to this figure.

I absolutely agree. Still, as people get better at inspections,
the name of the game really IS getting those defect Finds during
inspections way up. The trick is to not make it personal, but acknowledge
that software engineering is iterative and social in nature (see Weinberg,
Boehm, et al.).

>Joe may make silly logic errors, but his skill at design and spotting
>problems early are invaluable. You don't want to stifle such a person's
>abilities.

By the same token, Joe better realize that problems delayed are
more expensive problems. Tying his bonus into this is management's
little way of reminding him of basic engineering principles. And
business principles.

> Likewise, if detecting faults in another person's work directly
>results in negative performance ratings, think of the chilling effect this
>could have on peer review in a friendly group - or the inflammatory opposite
>result among antagonists.

Yep, this is true. Inspections are not some mid-80's trendy
management fad used to beat people over the heads. It is about
business survival and growth, about improving quality, and about
turning back the tide of American demise. The "enemy" is not
overseas; it is us. And it's within our power to change that.

>
>chris...
>--
>Christopher Lott Dept of Comp Sci, Univ of Maryland, College Park, MD 20742
> c...@cs.umd.edu 4122 AV Williams Bldg 301-405-2721 <standard disclaimers>


kam...@iccgcc.decnet.ab.com

Feb 5, 1991, 1:08:11 PM
In article <3...@tivoli.UUCP>, al...@tivoli.UUCP (Alan R. Weiss) writes:
> In article <29...@mimsy.umd.edu> c...@tove.cs.umd.edu (Christopher Lott) writes:
>>In article <40...@genrad.UUCP>, m...@genrad.com (Mark A. Swanson) writes:
[...]

> I absolutely agree. Still, as people get better at inspections,
> the name of the game really IS getting those defect Finds during
> inspections way up. The trick is to not make it personal, but acknowledge
> that software engineering is iterative and social in nature (see Weinberg,
> Boehm, et. al).
>
[...]
There seems to have been little discussion in this thread on the training
required for doing inspections. Creating the atmosphere for, and then
running proper inspections takes time, money, training, schedule impact,
and potentially changes in either the psychology and/or staff of the
organization. Are they valuable? Absolutely. Are they free? Not initially.
The resultant value for some organizations has been measured and is part of the
literature. IMHO most organizations have to go through some type of
justification of the value, and should continue to measure the value in
terms of $ and time.
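The suggestion above — justify inspections in $ and time, and keep measuring — can be made concrete with a toy payoff calculation. All figures below are invented placeholders for illustration; substitute your own measured values:

```python
def inspection_payoff(defects_found, fix_cost_now, fix_cost_later, inspection_hours):
    """Net engineer-hours saved by catching defects at inspection time
    rather than in the field. All cost inputs are in engineer-hours."""
    saved = defects_found * (fix_cost_later - fix_cost_now)
    return saved - inspection_hours

# Hypothetical: 25 defects found, 1 hour each to fix at inspection versus
# 20 hours each once fielded, 60 engineer-hours spent on the inspections.
print(inspection_payoff(25, 1, 20, 60))  # 415 engineer-hours net saving
```

If the number comes out negative for your organization's measured costs, that itself is useful data about how the inspections are being run.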

GXKambic
Allen-Bradley
Standard disclaimers.

Don Miller

Feb 5, 1991, 5:15:34 PM
>[...]
>There seems to have been little discussion in this thread on the training
>required for doing inspections. Creating the atmosphere for, and then
>running proper inspections takes time, money, training, schedule impact,
>and potentially changes in either the psychology and/or staff of the
>organization. Are they valuable? Absolutely. Are they free? Not initially.
>The resultant value for some organizations has been measured as is part of the
>literature. IMHO most organizations have to go through some type of
>justification of the value, and should continue to measure the value in
>terms of $ and time.
>
>GXKambic
>Allen-Bradley
>Standard disclaimers.

There has also been no discussion on how to make most effective
use of the time and personnel resources allocated to code reviews.
Is everyone really still just giving out hard copy of source,
hoping reviewers can figure it out, and then getting together
to hammer it out?

I envision a code review process which makes use of tools and
practices designed to minimize resource consumption. Static
analysis tools would be valuable towards understanding foreign
code. An on-line reviewer which cataloged comments of the
reviewer would facilitate pre-meeting information gathering.
A projection system accessing the actual code could be used
as a guide during the review.

I'm proposing automation, or even just optimal manual techniques,
as a way of addressing the primary concern regarding code reviews.
Some of the tools that I've mentioned above exist and the others
aren't difficult to imagine. Does anyone out there use these
techniques or are we still in the stone age?
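One of the tools imagined above — an on-line catalog of reviewer comments gathered before the meeting — could start as simply as this. The "file:line: comment" note format is my own assumption for the sketch, not anything from the post:

```python
import re
from collections import defaultdict

COMMENT = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+):\s*(?P<text>.+)$")

def merge_review_comments(per_reviewer):
    """Merge each reviewer's 'file:line: comment' notes into one agenda,
    grouped and sorted by source location so the meeting can walk the
    code in order instead of hammering it out from scratch."""
    agenda = defaultdict(list)
    for reviewer, notes in per_reviewer.items():
        for note in notes.splitlines():
            m = COMMENT.match(note.strip())
            if m:
                key = (m["file"], int(m["line"]))
                agenda[key].append((reviewer, m["text"]))
    return dict(sorted(agenda.items()))

notes = {
    "alice": "main.c:40: off-by-one in loop bound\nmain.c:12: magic number",
    "bob":   "main.c:12: constant needs a name",
}
for (fname, line), comments in merge_review_comments(notes).items():
    print(f"{fname}:{line}")
    for reviewer, text in comments:
        print(f"  [{reviewer}] {text}")
```

Even this much shows which locations drew comments from multiple reviewers, which is a reasonable first cut at a meeting agenda.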

--
Don Miller | #include <std.disclaimer>
Software Quality Engineering | #define flame_retardent \
Sun Microsystems, Inc. | "I know you are but what am I?"
do...@eng.sun.com |