[comp.software-eng] Code Inspections

pat@megatest.UUCP (Patrick Powers) (01/28/91)

It seems that the problem with code inspections is largely emotional.
Though there is plenty of evidence that code inspections are cost
effective, I believe they would tend to be boring and stressful.
Boring because they are a time-consuming and non-creative activity --
the current issue of IEEE Software recommends 150 lines of code reviewed
per man-day as a good figure.  I know I would not want to do this, and
who would?  Stressful because it is out of the programmer's control,
and because criticism is involved.  People identify closely with their
creations and find criticism painful.

Not only that, but your average programmer was very likely attracted to
programming in order to avoid social interaction and to create
something under his/her personal control without anyone else watching.
He/she is likely to be on the low end of the social tact scale and
singularly unqualified to deal with this delicate situation.  Again,
this may very well have attracted them to programming: it doesn't
matter whether anyone likes their personality, all that counts is
whether the program works.

In order to reduce these problems the following has been suggested:
1) The author not be present at the inspection
2) Only errors are communicated to the author.  No criticism of style allowed.

I've toyed with the idea of instituting code inspections but just
couldn't bear to be the instrument of a good deal of unhappiness.  It
seems to me that it could work with programmers directly out of college
who feel in need of guidance.  It also might succeed in a large
paternalistic organization as these would be more likely to attract
group oriented engineers.  Note that the classic studies of code
inspection occurred at mammoth IBM.

In spite of all this, I think code inspections would be accepted in any
application where there is a clear need such as the space shuttle
program where reliability is crucial and interfaces are complex.  In
such cases code inspections are clearly a necessity, and engineers
might welcome --or at least, tolerate -- them as essential to getting
the job done.  On the other hand, in routine applications with a good
deal of boilerplate code they could be a "real drag", exacerbating the
humdrum nature of the task.

-- 

mas@genrad.com (Mark A. Swanson) (01/28/91)

In practice we have not found programmers' egos to be a major problem
in properly conducted Code Inspections.  This, of course, assumes that
the Inspection process actually follows the defined cookbook approach,
complete with a moderator who keeps the discussion on track and non-personal
and a separate reader who actually goes through the code (or design document:
Inspections work well for those as well) one piece at a time.  In addition, it
is absolutely forbidden for someone's manager to help inspect his product or
to use the number of defects found by an inspection as part of a performance rating.

It helps sociologically, I suspect, if the first few pieces of code inspected
are from the senior technical people.  (I have certainly found inspections
useful.)

The major problem is in scheduling if the process model does not include
inspections.  They do take time and there are limits to how many anyone
can go through per week (about 2 max, I think).  This tends to make
Inspections a major time block on the project PERT chart (even if broken up
by area), and therefore they are very hard to add in to an existing schedule.

The problems are all solvable, but it requires full project and technical
management support to introduce this or any other significant innovation
that changes how one develops software.  If ego problems are blocking
inspections, then one isn't running inspections right.

	Mark A Swanson
	Senior Principal Engineer
	GenRad, Concord, MA
	mas@genrad.com

rcd@ico.isc.com (Dick Dunn) (01/29/91)

pat@megatest.UUCP (Patrick Powers) writes:
...
> Though there is plenty of evidence that code inspections are cost
> effective, I believe they would tend to be boring and stressful.
> Boring because they are a time consuming and non-creative activity --
> current issue of IEEE Software recommends 150 lines of code reviewed
> per man-day as a good figure...

Well, we all know that lines of code is a lousy measure of anything except
the number of newlines (don't we? :-), but still, if this measure is
anywhere close to real, it's a much stronger argument than Powers suggests
against code inspections.  A halfway-decent programmer can produce several
times that 150 l/d figure...proceeding through anything at 20 lines/hour
(that's 3 minutes per line, effectively???) is too slow to feel productive.
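
For what it's worth, the arithmetic behind those rates checks out, if you
assume a 7.5-hour working day (my assumption; neither post states the day
length):

```python
# Sanity-check the quoted figures: 150 lines per man-day at an
# assumed 7.5-hour working day gives 20 lines/hour, i.e. 3 min/line.
LINES_PER_MAN_DAY = 150
HOURS_PER_DAY = 7.5  # assumption; not stated in the posts

lines_per_hour = LINES_PER_MAN_DAY / HOURS_PER_DAY
minutes_per_line = 60 / lines_per_hour

print(lines_per_hour)    # 20.0
print(minutes_per_line)  # 3.0
```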

> ...Stressful because it is out of the programmer's control,
> and because criticism is involved.  People identify closely with their
> creations and find criticism painful.

Criticism may be somewhat painful inherently, but again I'm going to speak
about a "halfway-decent programmer" and say that such a person has long ago
transcended deriving personal injury from criticism of the code.  Good
grief, the *compiler* picks your code apart early on.  There are enough
opportunities to confront one's human frailty and fallibility in a day of
programming that I don't think this holds water.  Sure, there are prima
donnas and cowboy kids in the programming world, but they're not in the
mainstream.  Train 'em to accept criticism or get rid of 'em!

In my experience, when I hear a programmer (of the type I know/respect and
have been around for years) who's looking at code say something like
"That's idiotic!  That's absurd!  There's no way in hell that could
possibly work, you bozo!" - it's 95% certain he's talking about his own
code.

> Not only that, but your average programmer was very likely attracted to
> programming in order to avoid social interaction and to create
> something under his/her personal control without anyone else watching.

This is a fun thing when we joke about it, but it's pretty crappy to
pretend that it's serious.  I think the average programmer was attracted to
programming either because of the $ or because programming is
fun/interesting.  Sometimes, long stints at the terminal leave you without
much social interaction for a while, so it's at least plausible to
hypothesize that "the average programmer" can handle a low level of social
interaction.  That doesn't mean it's sought out.  Don't confuse correlation
with causality.

> He/she is likely to be on the low end of the social tact scale and
> singularly unqualified to deal with this delicate situation...

If you've got a bunch of low-tact people, the situation isn't delicate!

> In order to reduce these problems the following has been suggested:
> 1) The author not be present at the inspection

That means any minor question will have to be transcribed instead of being
answered on the spot.  It eliminates useful feedback.

Also, if you've got the class of delicate egotists you've described, it
means the author will be fretting about what people are saying about his
precious code behind his back.

> 2) Only errors are communicated to the author.  No criticism of style allowed.

Huh?  I don't want to put words in your mouth, but this sounds like either
style isn't important enough to criticize, or at the least, that style
takes a back seat to coddling the egos.

> I've toyed with the idea of instituting code inspections but just
> couldn't bear to be the instrument of a good deal of unhappiness...

Instead of "instituting" them, why not simply allow them to happen?  As you
note later in the article, there are some cases where going over particular
code is a Very Good Idea, and other cases where it's Massively Boring and
Useless.  So let people figure out when they need to go over the code, and
at what level.  (Sometimes you want to go over the high-level--the data
structures, the general breakout.  Once in a while you really want to go
over each line of a small section in excruciating detail.)

Think of it this way:  Code inspection is a tool.  You don't use every tool
for every job.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Mr. Natural says, "Use the right tool for the job."

rjn@crl.labs.tek.com (01/29/91)

In article <14964@megatest.UUCP>, pat@megatest.UUCP (Patrick Powers)
writes:
>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.
>He/she is likely to be on the low end of the social tact scale and
>singularly unqualified to deal with this delicate situation.  Again,
>this may very well have attracted them to programming: it doesn't
>matter whether anyone likes their personality, all that counts is
>whether the program works.

I think this is really ridiculous.  Pigeonholing the "average" programmer
as some unprofessional, nerd-dweeb is a little out there.  If a person is
unable to perform professionally because of social/emotional problems
then perhaps they are in the wrong profession.  I don't think this describes
today's "average" programmer at all.

In article <40530@genrad.UUCP>, mas@genrad.com (Mark A. Swanson) writes:
> In practice we have not found programmer's egos to be a major problem
> to properly conducted Code Inspections.  This, of course, assumes that
> the Inspection process is actually following the defined cookbook approach,
> complete with moderator who keeps the discussion on track and non-personal
> and a separate reader who actually goes through the code (or design document:
> Inspections work well for them as well) one piece at a time. In addition, it
> is absolutely forbidden for someone's manager to help inspect his product or
> to use the # of defects found by an inspection as part of performance rating.
> 

I just don't see this as being realistic.  What other job where a product
(software in this case) is produced do you find people not being judged on
the quality of their output?  People absolutely have to be judged on how well
they perform their jobs and if putting out high quality software is their
job then their manager should be able to measure their job performance and
react accordingly.

I find the attitude expressed in the above two postings very disturbing.  They
seem to imply that people involved in software production somehow need to be
treated very differently from people involved in other types of production.
You aren't allowed to measure their work, you can't judge their work, you
can't evaluate their job performance based on their work.  This kind of
thinking is what is keeping software from becoming a true engineering
discipline and it is why so much software is so bad.  Programmers need to
realize that their job is producing high quality software, not just a
program that "works".  They need to be held accountable for their work.
I have found that true professionals don't mind code inspections and
walkthroughs, because they are confident in their ability and proud of
their work.


Disclaimer: This is my opinion only.  Tektronix may not share my opinion.


Jim Nusbaum, Computer Research Lab, Tektronix, Inc.  
[ucbvax,decvax,allegra,uw-beaver,hplabs]!tektronix!crl!rjn
rjn@crl.labs.tek.com
(503) 627-4612

lordbah@bisco.kodak.COM (Lord Bah) (01/29/91)

On the last project I worked on we held code inspections at each
implementation milestone.  While they did get boring on occasion
we didn't find them particularly stressful.  As they say, it's the
code that's being inspected, not the coder.  I don't have any
numbers, but the project had about two orders of magnitude fewer
problems reported during QA.

> Boring because they are a time consuming and non-creative activity --
> current issue of IEEE Software recommends 150 lines of code reviewed
> per man-day as a good figure.

There were between 5 and 7 of us over the course of development.
The inspections lasted about 4 hours and covered in the neighborhood
of 1000 lines of code each.  We found it ABSOLUTELY ESSENTIAL 
that each person participating receive a copy of the code a few
days before the inspection and go through it on their own before
the inspection, otherwise massive time gets wasted as people read
the code during the inspection and there are fewer useful contributions
because people don't have any understanding of the code.  On the
average call it 1 day of work for each person, or about 166 lines
of code per man-day (not bad, IEEE!).
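
That rate can be reconstructed; the sketch below assumes six participants
(my midpoint for "between 5 and 7") each spending about one man-day on a
1000-line inspection:

```python
# Rough reconstruction of the inspection rate described above.
LINES_INSPECTED = 1000   # lines covered per inspection
PARTICIPANTS = 6         # assumed midpoint of "between 5 and 7"
MAN_DAYS_EACH = 1.0      # individual prep plus the 4-hour meeting

rate = LINES_INSPECTED / (PARTICIPANTS * MAN_DAYS_EACH)
print(int(rate))  # 166 lines of code per man-day
```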

> Not only that, but your average programmer was very likely attracted to
> programming in order to avoid social interaction and to create
> something under his/her personal control without anyone else watching.
> He/she is likely to be on the low end of the social tact scale and
> singularly unqualified to deal with this delicate situation.

An interesting insight.  Not always true, of course, and often
countered by the drive to "show off" in those who consider themselves
clever.

> In order to reduce these problems the following has been suggested:
> 1) The author not be present at the inspection
> 2) Only errors are communicated to the author.  No criticism of style
> allowed.

BAH!  I must disagree with both of these.  The author must be present
to provide explanations when called for, and to note what the
inspection requires to be corrected.  Style issues are also fair
game.  Note that having a group coding standard helps immensely in
preventing religious wars during the code inspections (they basically
get moved to the time when you determine the coding standard).
You can then focus on proper functionality, maintainability,
adherence to standards, etc.

We found code inspections productive and useful without inducing
unnecessary stress.

--------------------------------------------------------------------
    Jeff Van Epps    amusing!lordbah@bisco.kodak.com
                     lordbah@cup.portal.com
                     sun!portal!cup.portal.com!lordbah

conca@handel.CS.ColoState.Edu (michael vincen conca) (01/29/91)

In article <14964@megatest.UUCP> pat@megatest.UUCP (Patrick Powers) writes:

> It seems to me that it could work with programmers directly out of college
> who feel in need of guidance.  It also might succeed in a large
> paternalistic organization as these would be more likely to attract
> group oriented engineers.  Note that the classic studies of code
> inspection occurred at mammoth IBM.
>

Speaking as both a programmer who is in college and a programmer who is in
the business world and has gone through numerous code inspections, I would 
have to say that they are a novice programmer's worst nightmare.

I agree that there is a feeling of needing some guidance when you first start,
but a code inspection is a difficult place to get it.  Generally, people
review your code looking for logic errors and the like, and usually they will
find a lot more in yours than in the other programmers' code.  Of course, this
is to be expected, since you haven't had much experience and the others
know what has been done in the past and what the general operating procedures
are.

While there is nothing wrong with this, it may leave the inexperienced
programmer with a feeling of inadequacy.  If the code inspection is handled
poorly or is particularly harsh (in the eyes of the novice), it could leave
him/her feeling incompetent and questioning their abilities or education.

One of the hardest things that I found to do was to review the code of senior
programmers.  This may sound silly, but when you get your first programming
job it isn't exactly easy to tell the person who hired you that they might
have done something wrong.  In a worst-case scenario, there may be some
grizzled programmer on the team who is unwilling to admit that a new kid
might know something they didn't.  Of course, any programmer who is unwilling
to accept new ideas isn't much of a programmer.

-=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=-
Mike Conca, Computer Science Dept.   *  conca@handel.cs.colostate.edu
Colorado State University            *  conca@129.82.102.32
   "Everyday, as the network becomes larger, the world becomes smaller."

barry@hpdmd48.boi.hp.com (Barry Kurtz) (01/29/91)

We've used code inspections with great success.  All of our code must be
inspected by a group of our peers prior to module integration and 
execution.  This has greatly reduced bugs in our code and improved the
uniformity and integration of code modules.  I highly recommend code
inspections for any serious software project involving a team of
developers.


Barry Kurtz
Hewlett-Packard

These comments are my own and do not necessarily reflect the opinions of
my company.

cml@tove.cs.umd.edu (Christopher Lott) (01/30/91)

In article <40530@genrad.UUCP>, mas@genrad.com (Mark A. Swanson) writes:
>> In addition, it
>> is absolutely forbidden for someone's manager to help inspect his product or
>> to use the # of defects found by an inspection as part of performance rating.
>> 

<7362@tekchips.LABS.TEK.COM> rjn@crl.labs.tek.com (Jim Nusbaum) replies:
>
>I just don't see this as being realistic.  What other job where a product
>(software in this case) is produced do you find people not being judged on
>the quality of their output?

I think Mr. Nusbaum misses the point slightly.  A manager or group
leader who introduces code inspections into a s/w environment must 
assure the group that "number of faults found during inspections"
will not suddenly dominate performance appraisals.  

Of course people must be judged on the quality of their work - but
define software quality for me  ;-)  
Number of faults detected in inspections is important, but beware 
of attaching too much importance to this figure.

Joe may make silly logic errors, but his skill at design and at spotting
problems early is invaluable.  You don't want to stifle such a person's
abilities.  Likewise, if detecting faults in another person's work directly
results in negative performance ratings, think of the chilling effect this
could have on peer review in a friendly group -- or the inflammatory opposite
result among antagonists.

chris...
--
Christopher Lott    Dept of Comp Sci, Univ of Maryland, College Park, MD 20742
  cml@cs.umd.edu    4122 AV Williams Bldg  301-405-2721 <standard disclaimers>

kutem@rtsg.mot.com (Jon Kutemeier) (01/30/91)

In article <7362@tekchips.LABS.TEK.COM> rjn@crl.labs.tek.com writes:

   In article <14964@megatest.UUCP>, pat@megatest.UUCP (Patrick Powers)
   writes:
   >Not only that, but your average programmer was very likely attracted to
   >programming in order to avoid social interaction and to create
   >something under his/her personal control without anyone else watching.
   >He/she is likely to be on the low end of the social tact scale and
   >singularly unqualified to deal with this delicate situation.  Again,
   >this may very well have attracted them to programming: it doesn't
   >matter whether anyone likes their personality, all that counts is
   >whether the program works.

   I think this is really ridiculous.  Pigeonholing the "average" programmer
   as some unprofessional, nerd-dweeb is a little out there.  If a person is
   unable to perform professionally because of social/emotional problems
   then perhaps they are in the wrong profession.  I don't think this describes
   today's "average" programmer at all.

I tend to agree.

   In article <40530@genrad.UUCP>, mas@genrad.com (Mark A. Swanson) writes:
   > In practice we have not found programmer's egos to be a major problem
   > to properly conducted Code Inspections.  This, of course, assumes that
   > the Inspection process is actually following the defined cookbook approach,
   > complete with moderator who keeps the discussion on track and non-personal
   > and a separate reader who actually goes through the code (or design document:
   > Inspections work well for them as well) one piece at a time. In addition, it
   > is absolutely forbidden for someone's manager to help inspect his product or
   > to use the # of defects found by an inspection as part of performance rating.
   > 

   I just don't see this as being realistic.  What other job where a product
   (software in this case) is produced do you find people not being judged on
   the quality of their output?  People absolutely have to be judged on how well
   they perform their jobs and if putting out high quality software is their
   job then their manager should be able to measure their job performance and
   react accordingly.

   I find the attitude expressed in the above two postings very disturbing.  They
   seem to imply that people involved in software production somehow need to be
   treated very differently from people involved in other types of production.
   You aren't allowed to measure their work, you can't judge their work, you
   can't evaluate their job performance based on their work.  This kind of
   thinking is what is keeping software from becoming a true engineering
   discipline and it is why so much software is so bad.  Programmers need to
   realize that their job is producing high quality software, not just a
   program that "works".  They need to be held accountable for their work.
   I have found that true professionals don't mind code inspections and
   walkthroughs, because they are confident in their ability and proud of
   their work.

Unfortunately, this is a problem.  Since the coding of software can take
many different forms, how do you judge "quality"?  What one person
perceives as higher-"quality" code may seem like lower-"quality" code
to another.  Right now, there is no one correct way to write a
program, unlike other engineering disciplines, which may have a single
answer (Does this bridge support X lbs of weight?  A simplified example...).
How to define quality for software is still nebulous.
Do you base it on how well the program works?  How efficiently it runs?
How well it is commented?  All of the above?  Quality will mean different
things to different people, depending upon what their needs are.
That is why there is concern over rating programmers based on the
"quality" of their code.


   Disclaimer: This is my opinion only.  Tektronix may not share my opinion.

   Jim Nusbaum, Computer Research Lab, Tektronix, Inc.  
   [ucbvax,decvax,allegra,uw-beaver,hplabs]!tektronix!crl!rjn
   rjn@crl.labs.tek.com
   (503) 627-4612


Jon Kutemeier__________________________________________________________________
-----------------Software Engineer               /XX\/XX\  phone:(708) 632-5433
Motorola Inc.    Radio Telephone Systems Group  ///\XX/\\\ fax:  (708) 632-4430
1501 W. Shure Drive, Arlington Heights, IL 60004     uucp: !uunet!motcid!kutemj

rmartin@clear.com (Bob Martin) (01/30/91)

In article <14964@megatest.UUCP> pat@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.
  [... stuff removed about how inspections would be boring and
	   painful since they are critical and non creative...]

Inspections are a way for many engineers to learn about the creation of
another.  They are also a way for engineers to ensure that their creations
are complete and error free.  In my experience, inspections are not 
nearly as painful as shipping bugs to customers.  In fact I find inspections
to be quite painless.  When bugs are found, everyone (including the author)
breathes a sigh of relief that the problem was caught early.

>
>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.

This is a generalization which borders on bigotry.  All software engineers
are not socially inept.  I certainly wouldn't want to work for you if
I knew that this was your view.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection
>2) Only errors are communicated to the author.  No criticism of style allowed.

I think the author _must_ be present so that s/he can explain and defend
the work.  Criticism should be restricted to _real_ errors and violations
of written standards and procedures.
>

>I've toyed with the idea of instituting code inspections but just
>couldn't bear to be the instrument of a good deal of unhappiness.  

Inspections are instruments of happiness.  Customers are happier, managers
are happier, engineers are happier.  Inspections are investments in the
long term health of the product.  This is something that almost any
engineer can identify with.

>It
>seems to me that it could work with programmers directly out of college
>who feel in need of guidance.  It also might succeed in a large
>paternalistic organization as these would be more likely to attract
>group oriented engineers.  

Another unwarranted generalization.  Inspections work with anyone who
truly cares about the project/product they are working on.
>
>In spite of all this, I think code inspections would be accepted in any
>application where there is a clear need such as the space shuttle
>program where reliability is crucial and interfaces are complex.  

Are you saying you do not have a clear need to produce high-quality
software?  Given that the cost of fixing errors is multiplied by
many orders of magnitude if the errors get to the field, don't you
have a clear need to protect your organization's investment by making
sure bugs are fixed as early as possible?

IMHO inspections should be performed by all software organizations, big
to small, working on any kind of project, critical to recreational.  There
is no excuse for not checking your work. 

-- 
+-Robert C. Martin-----+:RRR:::CCC:M:::::M:| Nobody is responsible for |
| rmartin@clear.com    |:R::R:C::::M:M:M:M:| my words but me.  I want  |
| uunet!clrcom!rmartin |:RRR::C::::M::M::M:| all the credit, and all   |
+----------------------+:R::R::CCC:M:::::M:| the blame.  So there.     |

bwf@cbnewsc.att.com (bernard.w.fecht) (01/30/91)

In article <29653@mimsy.umd.edu> cml@tove.cs.umd.edu (Christopher Lott) writes:
>
>Of course people must be judged on the quality of their work - but
>define software quality for me  ;-)  

It meets customer expectations.

But I think I agree with Chris's answer.  Certainly a programmer needs
to be evaluated based on their output, but process meters cannot be the
sole source for evaluating the programmer or else the process won't
be followed (or it may be "tricked" into showing things that are not
real.)  Some would even say that NO process meters should be used, which
may be true.  I'm sure there are plenty of indicators available that
have nothing to do with the "process".

Another argument is that process meters usually show "faults leaked
from one stage to another."  These indicators evaluate the process
and the team's ability to run it.  No single programmer should be held
accountable for a buggy module in the field -- many have been involved
in getting that module out there.  Even the most obvious indicator
to me, i.e. maintainability, is a team job and is not likely to be the
result of one person's work.

wags@cimage.com (Bill Wagner) (01/30/91)

In article <7362@tekchips.LABS.TEK.COM> rjn@crl.labs.tek.com (Jim Nusbaum) writes:
>[reference from earlier post deleted]

>I just don't see this as being realistic.  What other job where a product
>(software in this case) is produced do you find people not being judged on
>the quality of their output?  People absolutely have to be judged on how well
>they perform their jobs and if putting out high quality software is their
>job then their manager should be able to measure their job performance and
>react accordingly.
>
>I find the attitude expressed in the above two postings very disturbing.  They
>seem to imply that people involved in software production somehow need to be
>treated very differently from people involved in other types of production.
>You aren't allowed to measure their work, you can't judge their work, you
>can't evaluate their job performance based on their work.  This kind of
>thinking is what is keeping software from becoming a true engineering
>discipline and it is why so much software is so bad.  Programmers need to
>realize that their job is producing high quality software, not just a
>program that "works".  They need to be held accountable for their work.
>I have found that true professionals don't mind code inspections and
>walkthroughs, because they are confident in their ability and proud of
>their work.
>
Your post seems to be combining two justifications for code inspections
into one.  When I have had my code inspected, it was to happen before
any testing of the code took place (by definition, after the first
clean compile).  The justification for that was that other experienced
programmers could spot errors and suggest small-scale design improvements
before testing occurred.  (Major design suggestions should have already
been received at design reviews.)  The end results were: 
1.  smaller, faster code
2.  less time spent debugging.

1. is a little tough to justify, but I believe it based on changes I 
made or suggested.  

2.  is easy to justify.  Any logic error found before testing begins
means less time spent in the debugger and in the testing -> fixing ->
retesting cycle.  

Now, if you wish to examine my (or anyone else's) code to
judge the quality of work, I'd rather it was done after the 
testing cycle.  That way, you are judging the completed code,
as opposed to a first draft.  

I agree that programmers need to accept responsibility for the 
quality of their work, but forced examinations aren't the best 
way for that to happen.  The idea of a review (whether it be a 
design review or a code review) is to allow peers the opportunity
to comment on the proposed solution to a technical problem.  The
reviews should remain just that.  Now, if you wish to measure a 
programmer's performance based on the quality of code that the 
programmer has agreed is ready to be released, that is another
matter entirely.


-- 
          Bill Wagner                USPS net: Cimage Corporation
Internet: wags@cimage.com                      3885 Research Park Dr.
AT&Tnet:  (313)-761-6523                       Ann Arbor MI 48108
FaxNet:   (313)-761-6551

cox@stpstn.UUCP (Brad Cox) (01/30/91)

In article <14964@megatest.UUCP> pat@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.

While not in any way arguing against your point, or about the utility of
code inspections in general, isn't it about time that we broke our
infatuation with the *process* of building software (source code,
style rules, programming language, lifecycle, methodology, software
development process, CASE, etc., etc.) and started concentrating on the
*product* itself?

To me, the paradigm shift that we're facing is figuring out how to comprehend
software products, which unlike manufactured things like firearm parts, 
are intangible...undetectable by the natural senses.

I envision tools to assist in understanding the static and dynamic properties
of a piece of code the way physicists study the universe, not by asking 
how it was built (a process question), but by putting it under test to 
determine what it does.

Consider two views of a stack class. The conventional view leads us to
ask what language it was written in, and perhaps read the source to see
what it does.

I'm proposing another view from the *outside*. This view ignores the process
whereby it was constructed. It involves specifying the stack's static
properties (does it provide methods named push and pop?) and its dynamic
properties (does pushing 1,2,3 cause pop to return 3,2,1?).
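As a sketch of what such an outside view might look like (the Stack class
below is a throwaway stand-in, not any particular implementation):

```python
# Black-box conformance check for "a stack", ignoring how it was built.
# Stack is a throwaway stand-in; any object could be substituted for it.

class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

def conforms_to_stack_spec(obj):
    # Static property: does it provide methods named push and pop?
    if not (callable(getattr(obj, "push", None)) and
            callable(getattr(obj, "pop", None))):
        return False
    # Dynamic property: does pushing 1,2,3 cause pop to return 3,2,1?
    for x in (1, 2, 3):
        obj.push(x)
    return [obj.pop(), obj.pop(), obj.pop()] == [3, 2, 1]

print(conforms_to_stack_spec(Stack()))  # → True
```

Note that conforms_to_stack_spec never asks what language the object was
written in, and never reads its source; it only exercises the advertised
behavior.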

Again, I'm not arguing against the need for a white-box view; only in
favor of a belts and suspenders approach in which we also beef up our
tools for capturing and reasoning about black-box information.
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

lfd@cbnewsm.att.com (leland.f.derbenwick) (01/30/91)

In article <1991Jan28.225231.19444@ico.isc.com>, rcd@ico.isc.com (Dick Dunn) writes:
> pat@megatest.UUCP (Patrick Powers) writes:
> ...
> > Though there is plenty of evidence that code inspections are cost
> > effective, I believe they would tend to be boring and stressful.
> > Boring because they are a time consuming and non-creative activity --
> > current issue of IEEE Software recommends 150 lines of code reviewed
> > per man-day as a good figure...
> 
> Well, we all know that lines of code is a lousy measure of anything except
> the number of newlines (don't we?:-), but still, if this measure is any-
> where close to real, it's a much stronger argument that Powers suggests
> against code inspections.  A halfway-decent programmer can produce several
> times that 150 l/d figure...proceeding through anything at 20 lines/hour
> (that's 3 minutes per line, effectively???) is too slow to feel productive.

The figure was 150 lines per person-day: effort, not time.  Since a
typical "full" code inspection involves (1) the author, (2) the
moderator, (3) the reader, (4,5) a couple of other inspectors, that
comes to 150 lines in about 1.6 hours per person, average.  That
seems quite reasonable, assuming 150 LOC inspected per hour, plus a
reasonable amount of preparation time for all participants.

(There have been some studies indicating that you can get by with
the author, a combined reader/moderator, and one other inspector,
or similar "reduced" inspections, without letting too many more
errors get by.  Assuming the same effort per individual, that
increases the inspection productivity to about 250 LOC per person-
day.)
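The arithmetic behind those figures can be checked directly; the
eight-hour person-day is an assumption:

```python
# Check the inspection arithmetic quoted above.
HOURS_PER_PERSON_DAY = 8    # assumed working hours in a person-day
LOC_PER_PERSON_DAY = 150    # IEEE Software figure: effort, not elapsed time
FULL_TEAM = 5               # author, moderator, reader, two other inspectors

# One person-day of total effort, split across the full team:
hours_each = HOURS_PER_PERSON_DAY / FULL_TEAM
print(hours_each)           # → 1.6

# "Reduced" three-person inspection, same effort per individual:
REDUCED_TEAM = 3
print(LOC_PER_PERSON_DAY * FULL_TEAM / REDUCED_TEAM)  # → 250.0
```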

On a separate issue, most references indicate an average productivity
across an entire project (counting effort for documentation, etc.)
somewhere in the range of 5 to 20 LOC per person-day.  Given that
coding is something like 1/6 of total effort, that still leaves
typical coding rates (assuming the modules are fully designed) in
the range of 30 to 120 LOC per person-day, much less than you seem
to assume.  (Certainly there are bursts at much higher rates, and
a few people [Ken Thompson?] can probably sustain much higher rates.
But it isn't common.)
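The implied coding rate is simple to back out; the "coding is 1/6 of
total effort" share is the assumption here:

```python
# Back out coding rates from whole-project productivity figures.
project_rate = (5, 20)   # LOC per person-day, all project effort counted
CODING_SHARE = 6         # coding taken as ~1/6 of total effort (assumption)

coding_rates = [r * CODING_SHARE for r in project_rate]
print(coding_rates)      # → [30, 120]
```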

 -- Speaking strictly for myself,
 --   Lee Derbenwick, AT&T Bell Laboratories, Warren, NJ
 --   lfd@cbnewsm.ATT.COM  or  <wherever>!att!cbnewsm!lfd

lfd@cbnewsm.att.com (leland.f.derbenwick) (01/30/91)

In article <7362@tekchips.LABS.TEK.COM>, rjn@crl.labs.tek.com writes:
> In article <40530@genrad.UUCP>, mas@genrad.com (Mark A. Swanson) writes:
> > In practice we have not found programmer's egos to be a major problem
> > to properly conducted Code Inspections.  This, of course, assumes that
> > the Inspection process is actually following the defined cookbook approach,
> > complete with moderator who keeps the discussion on track and non personal
> > and a seperate reader who actually goes through the code (or design document:
> > Inspections work well for them as well) one piece at a time. In addition, it
> > is absolutely forbidden for someone's manager to help inspect his product or
> > to use the # of defects found by an inspection as part of performance rating.
> > 
> 
> I just don't see this as being realistic.  What other job where a product
> (software in this case) is produced do you find people not being judged on
> the quality of their output?  People absolutely have to be judged on how well
> they perform their jobs and if putting out high quality software is their
> job then their manager should be able to measure their job performance and
> react accordingly.

It depends what you see as the output.  In my view, that's the code that
makes it past inspection, integration, and test, and goes out to the
customer.  A higher-than-average rate of errors found in inspection might
mean that the programmer is bad, or it might mean that he/she writes
code that is so clearly readable that the inspection catches a greater
than average percentage of the errors.  Similarly, a higher-than-average
rate of errors caught in integration and test might mean that the code
is bad, or the inspection was sloppy, or that the code was designed to
be thoroughly testable.

If you make errors caught at an early stage "evil", then people will do
everything they can to avoid catching them there: writing unclear code,
avoiding inspections, forcing inspections to be rushed, arguing over
what is and isn't a bug, etc., etc.  And you might as well not bother
doing the inspections at all.

 -- Speaking strictly for myself,
 --   Lee Derbenwick, AT&T Bell Laboratories, Warren, NJ
 --   lfd@cbnewsm.ATT.COM  or  <wherever>!att!cbnewsm!lfd

asylvain@felix.UUCP (Alvin "the Chipmunk" Sylvain) (01/30/91)

In article <14964@megatest.UUCP> pat@megatest.UUCP (Patrick Powers) writes:
> It seems that the problem with code inspections is largely emotional.
> Though there is plenty of evidence that code inspections are cost
> effective, I believe they would tend to be boring and stressful.
> Boring because they are a time consuming and non-creative activity --
> current issue of IEEE Software recommends 150 lines of code reviewed
> per man-day as a good figure.  I know I would not want to do this, and
> who would?  Stressful because it is out of the programmer's control,
> and because criticism is involved.  People identify closely with their
> creations and find criticism painful.

Any kind of criticism must be tempered by the fact that, regardless of
how "lone wolfish" the programmer may be, s/he is ultimately part of a
team.  It is the team which reviews the work, and the team which finds
problems.

Personality conflicts are a management problem, including letting the
people know that all criticism _shall be_ viewed constructively.

[...]
> In order to reduce these problems the following has been suggested:
> 1) The author not be present at the inspection

No, the author must be there.  If s/he can't handle criticism, espe-
cially in a team context such as this, s/he needs to grow up some.
Again, this is part of management's job, to make sure that everyone
knows:
   (a) Everybody makes mistakes, including You,
   (b) We are earnestly looking for All mistakes, including Yours, so we
       can remove them, and
   (c) When we find Your mistakes, it doesn't mean You're Stupid or that
       We Don't Love You.  It just means that you fall into category (a)
       like the rest of us, and, hallelujah, we succeeded in category (b).

> 2) Only errors are communicated to the author.  No criticism of style allowed.

I agree with this to a degree.  If the style is making understanding
difficult, that is a valid point to bring up with the author.  Badly
spaghettied code, for example, must be avoided unless absolutely neces-
sary.  Inconsistent style can lead to errors in understanding, and
again, must be avoided.

> I've toyed with the idea of instituting code inspections but just
> couldn't bear to be the instrument of a good deal of unhappiness.
[...]

I assume then that you are in management.  Therefore, it is up to you to
mitigate this unhappiness.  Believe me, it *can* be done.  I suspect
that even the loneliest programmer appreciates an excuse to crawl out
from under the terminal, so long as s/he feels there is a valid reason
to do so.  Finding and removing errors is a valid reason.  Perfecting
the team product is a valid reason.

Just let them know that the criticism _shall be_ constructive, and that
it _shall be_ rendered in a professional manner.  No "nyahh-nyahh's"
allowed!
--
asylvain@felix.UUCP (Alvin "the Chipmunk" Sylvain)
=========== Opinions are Mine, Typos belong to /usr/ucb/vi ===========
"We're sorry, but the reality you have dialed is no longer in service.
Please check the value of pi, or see your SysOp for assistance."
=============== Factual Errors belong to /usr/local/rn ===============
UUCP: uunet!{hplabs,fiuggi,dhw68k,pyramid}!felix!asylvain
ARPA: {same choices}!felix!asylvain@uunet.uu.net

rh@smds.UUCP (Richard Harter) (01/30/91)

In article <1991Jan28.225231.19444@ico.isc.com>, rcd@ico.isc.com (Dick Dunn) writes:
> pat@megatest.UUCP (Patrick Powers) writes:
> ...
> > Though there is plenty of evidence that code inspections are cost
> > effective, I believe they would tend to be boring and stressful.
> > Boring because they are a time consuming and non-creative activity --
> > current issue of IEEE Software recommends 150 lines of code reviewed
> > per man-day as a good figure...

> Well, we all know that lines of code is a lousy measure of anything except
> the number of newlines (don't we?:-), but still, if this measure is any-
> where close to real, it's a much stronger argument that Powers suggests
> against code inspections.  A halfway-decent programmer can produce several
> times that 150 l/d figure...proceeding through anything at 20 lines/hour
> (that's 3 minutes per line, effectively???) is too slow to feel productive.

Reality check time.  One can write several hundred lines of code in one
session.  However that is exceptional.  Typical industry figures are 
5-10 thousand lines of delivered code per year which is 25-50 lines/day.
Programmers who can average 100 lines/day are quite exceptional.

Interestingly enough, these figures don't seem to vary a great deal with
language.  The rate is somewhat higher for assembly language and for 
verbose languages such as COBOL, but not enough to compensate for their
reduced expressiveness.

Since many of us can and have written several hundred lines of code at
one sitting, why is the average rate so low?  One obvious reason is that
once you have written the code you have to compile and debug it.  Another
is that a fair percentage of one's time gets eaten up in non-programming
activities.  Still another is that there is always a certain amount of
low-level design work that must be done while writing code.  (Anywhere
from 20-70% of the design work is done at coding time, depending upon
the methodology used.)  Still another factor is that quite a fair percentage
of the code that is written ends up not being deliverable.

Let's check.  150 lines is 3-5 procedures' worth of structured modular
code, i.e. about one low-level module every two hours *on average*.  In a
decent code review you have to verify that all external interfaces are
correctly referenced and used, that each line of code is correct, and
that the code makes sense.  You also want to verify that the modular
decomposition is appropriate and that the modules fit into the over-all
design.  Granted, some reviews will go quite quickly.  However, the
average will probably be closer to that 150 lines/day.
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

frank@grep.co.uk (Frank Wales) (01/31/91)

Here are some datapoints and opinions based on personal experience.

In article <14964@megatest.UUCP> pat@megatest.UUCP (Patrick Powers) writes:
>Boring because they are a time consuming and non-creative activity --

This implies that criticism has no part to play in creation, which I
do not believe.

>current issue of IEEE Software recommends 150 lines of code reviewed
>per man-day as a good figure.

The last project where we reviewed the code had a figure of about 10 lines
of code reviewed per minute (based on reviewing 8,500 lines of
product, which was done by the three authors in three working days).
Whoever reviews at a rate of one line per three minutes had better have
some pretty long lines of code.

>People identify closely with their creations and find criticism painful.

Criticism is in part an educational process; programmers who don't want
to learn what they do wrong, or what other people think of their work,
aren't the kind I'd depend on to produce quality products.  If the people
involved in the project give a damn about doing their best, they will
quickly come to enjoy the code review process as a learning experience;
maybe it will also deepen the respect they have for each other's work,
which is a valuable team-building tactic that good managers can exploit.

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.
>[other vaguely insulting generalisations deleted]

It seems to me that maybe you're working with too many low-grade
code-grinders.  Hire some actual software professionals in their place.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection

Bad idea.  Very bad.  To apply a courtroom metaphor, it would be like
denying the accused the ability to be present, and give his own version
of the events.  At the code reviews I run, the author of the code is
the reader of the code.  It is the responsibility of the others present
to convince the author that code is poor, where this is appropriate.
The principal software engineer is responsible for deciding issues of
style, and the project manager has final say on what goes in the actual
product.  Interruptions are not allowed to enter the code review room,
while disagreements are not allowed to leave it; they must be resolved
before people go back to writing software.  

>2) Only errors are communicated to the author.  No criticism of style allowed.

Also a bad idea.  If I can't understand something, I don't care if it works,
because my confidence in it is reduced.  Style may not matter at run-time,
but it certainly matters at read-time and think-time.  If you go under a bus,
I don't want to have to hire a medium to figure out how to fix your code.

Other random comments: I use scheduled code reviews as a place to
resolve implementation details which were decided upon on the fly by
a programmer when the design documents or other colleagues could not
give an authoritative answer when the code was written.  In this regard,
they are a valuable way of helping to analyse the many small-but-important
decisions that get taken during software construction.  A second
important use is to allow each programmer to become familiar with the
work of his colleagues, which is a combined educational and confidence-
building exercise.  And a third use is to allow programmers to teach each
other tricks and techniques that have never been explicitly communicated or
written down anywhere else but in the code itself.  Hold the whole review
off-site if you can, in a quiet room with plenty of paper, pencils, 
a flipchart and {black|white}board, coffee, Pepsi and lots of donuts.

I believe code review is a valuable tool, and avoiding it for what amounts
to egotistical reasons serves neither the developers nor the customer.
[FYI: I usually review my own code during a project, even if I am
 the sole author -- just like Oscar Wilde, I enjoy a good read :-).]
--
Frank Wales, Grep Limited,             [frank@grep.co.uk<->uunet!grep!frank]
Kirkfields Business Centre, Kirk Lane, LEEDS, UK, LS19 7LX. (+44) 532 500303

ppblais@bcars305.bnr.ca (Pierre P. Blais) (01/31/91)

I have had some experience with code inspections which I would
like to relate to you.

From my experience, I think that code inspections should be started
after a certain amount of (sanity) testing is done. This prevents
wasting the inspector's time in uncovering (obvious) defects. This
is the same argument as using the compiler to find syntax errors
instead of having a human spend time reading the code looking for
them.

Now, how do you decide when enough testing has been done and code
inspection should start?  From empirical data, one can determine
how many defects are detected per person-hour of testing and of code
inspection.  When fewer defects are detected in one hour of testing
than would be during an inspection, it is time to start inspecting.
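That crossover rule is easy to state as a predicate; the defect rates
below are made-up illustrations, not measured data:

```python
# Crossover rule for switching from sanity testing to inspection.
def should_start_inspecting(defects_per_test_hour, defects_per_inspection_hour):
    """True once an hour of testing finds fewer defects than an hour of inspection."""
    return defects_per_test_hour < defects_per_inspection_hour

# Early on, testing still flushes out obvious defects cheaply:
print(should_start_inspecting(4.0, 1.5))  # → False
# Later, testing dries up while the inspection rate holds steady:
print(should_start_inspecting(0.8, 1.5))  # → True
```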

Code inspections have the advantage that they spread knowledge
about the software to people other than the author.  Selection of
inspectors should be done on that basis.  In addition, code comments
are reviewed for usability; there is no way to test for that by
running the code.

The main drawback to inspections is the drain on human resources.
If a code inspection team consists of four people including the
author, the author usually ends up "owing" three people.  In that
case, developers usually spend three times more time inspecting
other people's code than their own.

Also, some projects may be so large that a pace of 150 lines of
code per hour makes it impossible to inspect all code within a
reasonable period of time (taking into account no more than one
three-hour inspection session per day to combat fatigue and
boredom).

All in all, when used judiciously, at the right time, and when they
are planned properly, code inspections are well worth the time.

--
Pierre P. Blais                                  Bell-Northern Research
-----------------------------------------------------------------------
BITNET:    ppblais@bnr.ca                        VOICE:  (613) 763-4270
UUCP:      uunet!bnrgate!bcars305!ppblais        FAX:    (613) 763-2626
LAND:      P.O. Box 3511, Station C, Ottawa, Canada, K1Y 4H7
-----------------------------------------------------------------------
"Design defect fixes; don't just throw code at them."

travis@delta.eecs.nwu.edu (Travis Marlatte) (02/01/91)

I have worked at several places that used code inspections. At each place,
there were some who had a hard time dealing with inspections from an
emotional perspective. However, the inspections were done well and 
these same people eventually became major contributors to the inspections
and to the team environment in general.

Does this say that introverted programmers have no place in a development
team? And, that open, sharing programmers are the only ones that are
generally usable? No, I don't think so. However, if it is a team, then
all players must participate at some level.

I am of the opinion that all people can achieve an attitude of open
participation in programming teams and inspections. One colleague who
was scared to death to go into her first inspection decided that she would
rather open her code to local criticism than to have the customer find the
faults. She went on to become a project manager and now operates as an
independent consultant.

You too can achieve popularity. Your friends won't recognize you. You'll
be the life of the party. Just send for our free booklet, "I once was
an inspection dropout." Please include $25.00 for postage and handling.

I don't buy the observation that some programmers were born "this" way and
so have to stay "this" way.

Travis Marlatte

marick@cs.uiuc.edu (Brian Marick) (02/01/91)

ppblais@bcars305.bnr.ca (Pierre P. Blais) writes:

>From my experience, I think that code inspections should be started
>after a certain amount of (sanity) testing is done. This prevents
>wasting the inspector's time in uncovering (obvious) defects. This
>is the same argument as using the compiler to find syntax errors
>instead of having a human spend time reading the code looking for
>them.

I have a similar approach.  I find that inspections and testing are
good at discovering different kinds of faults.  For example, dynamic
testing is poor at discovering what Dewayne Perry calls "obligation
faults", cases where, for example, heap-allocated memory is not freed
or open files are not closed.  But dynamic testing (when backed up by
tools to measure test suite coverage) is effective at discovering
off-by-one errors or wrong-variable-used errors; consequently, it's a
waste to check for these during code reads.  (It's a waste because, I
believe, the long-term cost of detecting a fault with a code read is
higher than with dynamic testing.  The reason is maintenance: that
dynamic test, if written sensibly, can be rerun quite cheaply.
"Rerunning" a code read when you change a module is roughly the same
as the cost of the original code read.)
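A tiny illustration of the distinction (hypothetical code, not from any
project discussed here): a function can pass its functional tests while
still breaking an obligation that only a code read is likely to notice:

```python
import os
import tempfile

def count_lines_leaky(path):
    f = open(path)
    return len(f.readlines())   # obligation fault: f is never closed

def count_lines(path):
    with open(path) as f:       # the fix a checklist question would prompt
        return len(f.readlines())

# A dynamic test cannot tell the two apart -- both return the right answer:
fd, path = tempfile.mkstemp()
os.write(fd, b"a\nb\nc\n")
os.close(fd)
assert count_lines_leaky(path) == count_lines(path) == 3
os.remove(path)
```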

What I do nowadays is keep a catalog of explicit questions to ask when
reading the code.  I test a module by writing black box tests (pretty
much the standard technique, except that I maintain a catalog of
special test cases for data structures, operations, and combining
rules that recur in specifications), then I look inside the code to
write more black-box-style tests based on the cliches I find there.
(The idea being that people implementing cliched code tend to make
cliched mistakes.)  At this point, while I'm reading the code anyway,
I apply the inspection checklist to look for those kinds of faults I
expect the other tests won't catch.  Then I run the tests and add new
ones until I've satisfied branch, loop, multi-condition, and weak
mutation coverage.  

This seems to work pretty well and hasn't been as time-consuming as
I'd earlier expected.

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick

wex@dali.pws.bull.com (Der Grouch) (02/02/91)

In article <6109@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:
   While not in any way arguing against your point, or about the utility of
   code inspections in general, isn't it about time that we started breaking
   our infatuation with the *process* of building software (source code,
   style rules, programming language, lifecycle, methodology, software
   development process, CASE, etc.) and started concentrating on the
   *product* itself?

Well, I'm not sure I totally understand your statement.  To me, every
product is the end result of a process.  In particular, if we are going to
use tools of some sort, we are going to thereby impose a process on the tool
users.  For example, you say...

   I envision tools to assist in understanding the static and dynamic
   properties of a piece of code the way physicists study the universe, not
   by asking how it was built (a process question), but by putting it under
   test to determine what it does.

This paragraph, to me, seems to be advocating one kind of process over
another.  I'm not saying that you're right or wrong in advocating the "test
and see what it does" process, but I am saying that it's a process like any
other.

Have I missed your point entirely?

--
--Alan Wexelblat			phone: (508)294-7485
Bull Worldwide Information Systems	internet: wex@pws.bull.com
"Honesty pays, but it doesn't seem to pay enough to suit some people."

dparter@shorty.cs.wisc.edu (David Parter) (02/02/91)

At my previous place of employment, some type of code review/read/inspection
was accepted as normal. Unfortunately, there was no clear agreement on
what exactly this event was all about (thus the multiple-choice name),
so some code reviews were more detailed than others.

Here, however, I offer some observations about the process. Of course,
this is all anecdotal, and your mileage will vary...

    1.  The more comfortable the author (and the reviewers) are with
	the others involved in the review, and with the review process,
	the more productive the review will be, and the more likely to
	avoid ego battles. Of course, if everyone is everyone's best
	friend and therefore refrains from offering valid criticism,
	then it is a waste of everyone's time.

    2.  The more experience all the participants have with code
	reviews, the more effective they are. Once everyone knows what
	to expect ("yes, they will criticize my comments, and if my
	comments aren't telling them what the code does, then I guess
	they are right, my comments aren't good enough"), and has had a
	few chances to be both reviewer and author, they are
	comfortable with offering and taking criticism.

    3.  MAKE SURE TO STAY WITHIN THE AGREED PURPOSE OF THE REVIEW. The
	worst review I have ever heard of (I was not there, I was down
	the hall and heard a lot of it) was the following situation:

	The code author was a new hire (w/in 6 months), just out of
	college. It was the first time his code had been subject to
	review.  His manager was not at the review (he was not in town,
	in fact).  One of the reviewers was a senior person who had
	been involved in the initial project plan (of which this was a
	small, but independent part), but not at any of the subsequent
	reviews related to this part (functional specification,
	design).

	The senior reviewer proceeded to turn the code review into a
	design review, ripping the code and design to shreds, and
	producing a new design at the meeting.  No one stopped him, and
	no one defended the programmer. Needless to say, the programmer
	was very upset. When his manager returned, and heard what had
	happened, he was not pleased either -- but he knew that it
	wasn't the programmer's fault, and told him so.

    4.  Good code reviews with "tough" reviewers often lead to better
	code because the author makes an effort to prepare for the
	review.  He or she may read through the code, anticipating the
	comments of the reviewers, and improving things beforehand.
	What would have been "good enough" in the past gets improved by
	the best person to do the improving -- the original coder.

    5.  One poster mentioned that the novice programmer presenting his
	or her code for review the first time may be very upset by the
	review. One way to soften the blow is have the novice
	participate as an extra reviewer (it was also mentioned that he
	or she may be wary of criticizing senior people, so "extra"
	because they may not be that productive) at several reviews of
	other parts of the project, with (some or all of) the same
	reviewers who will be reviewing his or her code, so that he or
	she will be used to their personal style and will know what to
	expect.

    6.  If the reviewers do not understand the code, or how it fits
	into something else, DO NOT RELY ON THE AUTHOR to clarify. 
	
	I was the author for a set of changes to some existing code.
	The review focused on the parts I changed, not on the program
	as a whole. I was called upon to give an overview of how it all
	fit together, and what I had changed. My overview was accepted,
	the code was approved, and a few days later I found numerous
	errors that should have been found during the code review (or
	design review, which is sometimes difficult for modifications
	to existing code) -- and weren't, because they believed my
	explanation of how things worked, which turned out to be wrong.
	Since no members of the review team had an understanding of the 
	"big picture," at least one of them should have been charged 
	with doing a more detailed review of the "big picture" in order 
	to provide assurance that various assumptions and/or 
	assertions were valid.

Good luck in your reviewing,

	--david
-- 
david parter					dparter@cs.wisc.edu

cl@lgc.com (Cameron Laird) (02/02/91)

In article <1991Feb1.214750.28536@spool.cs.wisc.edu> dparter@shorty.cs.wisc.edu (David Parter) writes:
			.
			.
			.
>    6.  If the reviewers do not understand the code, or how it fits
>	into something else, DO NOT RELY ON THE AUTHOR to clarify. 
>	
>	I was the author for a set of changes to some existing code.
>	The review focused on the parts I changed, not on the program
>	as a whole. I was called upon to give an overview of how it all
>	fit together, and what I had changed. My overview was accepted,
>	the code was approved, and a few days later I found numerous
>	errors that should have been found during the code review (or
>	design review, which is sometimes difficult for modifications
>	to existing code) -- and weren't, because they believed my
>	explanation of how things worked, which turned out to be wrong.
>	Since no members of the review team had an understanding of the 
>	"big picture," at least one of them should have been charged 
>	with doing a more detailed review of the "big picture" in order 
>	to provide assurance that various assumptions and/or 
>	assertions were valid.
			.
			.
			.
This is an IMPORTANT rule, and one which applies far, far beyond
inspections.  One of the best things we can do for each other is
to cultivate the attitude that authors don't get to explain them-
selves (well, they do, but only on special occasions).  Authors
*must* push themselves to write so that they can be understood;
if readers/reviewers/inspectors don't understand, that is (in gen-
eral) a sign that the author needs to rewrite (most often:
comment more clearly) what he or she has written.  One of the
nice things about this rule is that it's easy to teach:  each time
an author starts, "Well, what I'm trying to do there is ...", his
or her colleagues need immediately to remind, "Then *say* ..., IN
THE SOURCE."
--

Cameron Laird		USA 713-579-4613
cl@lgc.com		USA 713-996-8546 

alan@tivoli.UUCP (Alan R. Weiss) (02/03/91)

In article <14964@megatest.UUCP> pat@megatest.UUCP (Patrick Powers) writes:
>It seems that the problem with code inspections is largely emotional.

Actually, my belief is that it is *also* temporal:  developers believe
that it takes up a lot of time.  When proven otherwise, they are more
receptive.  At that point, *some* people have reservations based upon
BAD inspections, or poorly-trained inspectors and moderators, or
simply the THOUGHT of inspections.  Hearsay also plays a part.

>Though there is plenty of evidence that code inspections are cost
>effective, I believe they would tend to be boring and stressful.

It's a funny thing about boredom.  In a small start-up like ours
(Tivoli Systems), everyone has a stake in the success of our firm.
If something is proven cost-effective, our developers are absolutely
behind it 110%.  If it is a time-waster, they chafe.  The challenge
is two-fold:  first, to PROVE the merits of inspections, and that can
be done in a number of ways:  case histories, measuring your own
development/quality statistics, cost analysis, faith, etc. :-)

The second challenge is more fundamental:  how do you get developers
to view themselves as software engineers (i.e. professionals)?
How do you get developers in BIG corporations with a diffused sense of
ownership and responsibility (and reward) to get excited about cost savings?
And THAT is a management challenge.  Good management is constantly selling
ideas and motivating their staff, challenging them to link the development
plan with the business plan in their own minds, and thence to THEIR plans. 

>Boring because they are a time consuming and non-creative activity --
>current issue of IEEE Software recommends 150 lines of code reviewed
>per man-day as a good figure.  I know I would not want to do this, and
>who would?

So, I can assume that you are advocating that no one actually READS
the source code?  No?  Then why not actually track issues and actions?
Inspections, done well, SAVE time.  Guaranteed!  Besides, the time
saved is the back-end development time (you know, the ol' release/
test the bugs/return to development/fix the bugs/ad nauseam cycle.
Developers get REAL bored with fixing bugs all the time, right?)
	
At Tivoli, I am VERY fortunate to have a group of developers and
managers who believe in inspections (a QA Manager's dream!).  We
started this process, and guess what?  The developers are totally
convinced that it will save them LOTS of time later.  We're finding
bugs in all kinds of deliverables (specifications, manuals, source, etc).

I promise to keep this newsgroup apprised of the process (special
thanks goes out to Kerry Kimbrough, a very brave Development Manager
indeed who started this process at Tivoli).

>Stressful because it is out of the programmer's control,
>and because criticism is involved.  People identify closely with their
>creations and find criticism painful.

Bzzt!  Wrong.  In Fagan Inspections (as modified by Tom Gilb),
ONLY the developer gets to prioritize and rank the incoming defects.
The actual inspections occur off-line individually, and the
Defect Logging Meeting is simply a fast recording of defects.  Afterward,
the developer MUST respond to every item, but can in fact choose his/her
response based upon engineering principles.  QA's function is to serve
in a consulting capacity, continually working with the community to
correlate requirements with design with implementation.
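The bookkeeping this process implies is simple enough to sketch.  Here is a
minimal illustration in modern Python (all field names and helpers are mine,
invented for the sketch, not from any published Fagan or Gilb checklist): the
meeting only logs defects, and the author must record a response to every one
before the inspection can close.

```python
# Sketch of Fagan/Gilb-style defect logging: the meeting records
# defects fast, with no discussion; only the author later assigns
# each one a disposition.  All names here are illustrative.

class Defect:
    def __init__(self, location, description):
        self.location = location
        self.description = description
        self.disposition = None   # set later by the author, never in the meeting

def log_meeting(raw_findings):
    """Fast recording only: no debate, no fixes, no ranking."""
    return [Defect(loc, desc) for loc, desc in raw_findings]

def author_responds(defects, dispositions):
    """The author must respond to every item; a missing key is an unanswered item."""
    for d in defects:
        d.disposition = dispositions[d.location]

def inspection_closed(defects):
    return all(d.disposition is not None for d in defects)

defects = log_meeting([("io.c:42", "unchecked return value"),
                       ("io.c:88", "off-by-one in loop bound")])
author_responds(defects, {"io.c:42": "fix",
                          "io.c:88": "rejected: bound is correct"})
print(inspection_closed(defects))   # True only once every defect has a response
```

Note the author is free to *reject* a finding on engineering grounds; the
process only requires that the response be recorded.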

Besides, haven't you read Gerald Weinberg's "The Psychology of
Computer Programming"?  Ever hear of ego-less programming?  All
programming is iterative!!!  It's just a question of solving problems
earlier in the product's life, or later (i.e. in support).

>Not only that, but your average programmer was very likely attracted to
>programming in order to avoid social interaction and to create
>something under his/her personal control without anyone else watching.

Maybe.  But I don't *think* so.  Programming is
an intensely SOCIAL activity.  In anything other than a one-person
shop, programmers MUST interact with each other.  Sure, the culture
is different from the interactions between, say, hairdressers in a salon
(hello, Laurelyn!).  But there is ALWAYS a prevailing culture, and
that implies society. Again, Weinberg is the guru in this.

>He/she is likely to be on the low end of the social tact scale and
>singularly unqualified to deal with this delicate situation.

Lucky thing I'm reading this before my staff sees this!  They would
take exception to this, and so do I.  Programmers may be different,
but the stereotype of a hacker-nerd is insulting and gross.

>Again, this may very well have attracted them to programming: it doesn't
>matter whether anyone likes their personality, all that counts is
>whether the program works.

Maybe in school.  They don't teach software engineering, or how to actually
run a business based around software, in most CS programs.  Programmers
find out FAST how the real world works when someone actually pays them
(large sums of) money.  They find out that "working" is trivial;
it's optimization, schedule, cost, maintainability, and a number of other
factors that count. Including teamwork, dude.

>In order to reduce these problems the following has been suggested:
>1) The author not be present at the inspection
>2) Only errors are communicated to the author.  No criticism of style allowed.

You need to study Fagan.  Also, Kerry (and some of the net.people :-)
turned me onto Tom Gilb:  absolutely fabulous stuff.  Your ideas
are too primitive.  You really need to study this subject first.
Lemme know if you want help!

>I've toyed with the idea of instituting code inspections but just
>couldn't bear to be the instrument of a good deal of unhappiness.

I assume that you don't want to tell them that they are laid off,
either, due to lack of sales, right?  

Is a friend someone who tells you nice-sounding lies, or is a friend
someone who tells you the truth?

Whatever happened to courage?  You get courage by being sure of your
facts, by researching matters, and by measuring success and then
selling the hell out of it.  Test the waters!

>It seems to me that it could work with programmers directly out of college
>who feel in need of guidance.  It also might succeed in a large
>paternalistic organization as these would be more likely to attract
>group oriented engineers.  Note that the classic studies of code
>inspection occurred at mammoth IBM.

Yet often even IBM does not do inspections (I speak from personal
experience).  Yes, they have studied inspections and methodology for
over 20 years (so has TRW), but then again they have the money to
do pure research (see also Software Engineering Institute at
CMU and Purdue's program).  Still, inspections work regardless
of organization size.


>In spite of all this, I think code inspections would be accepted in any
>application where there is a clear need such as the space shuttle
>program where reliability is crucial and interfaces are complex.

Another funny thing, Patrick:  why do people think that life-threatening
applications are more important than business-threatening applications?
To a small business owner, software bugs can literally KILL his/her
business. To THEM, there is a clear need, right?  They would rather die
than see their "baby" croak.

>...  On the other hand in routine applications with a good
>deal of boiler plate code they could be a "real drag", exacerbating the
>humdrum nature of the task.

Maybe.  But "boiler-plating" should be treated as such.  However,
I've seen cases where people *think* it's template, but it's not.
Expensive, evil cases.  Costly cases.  And THAT is a REAL drag :-)

	.-------------------------------------------------------.
	|  Alan R. Weiss                                        |
	|  Manager, QA and Mfg.  _______________________________|
	|  Tivoli Systems, Inc.	| These thoughts are yours for  |
	|  Austin, Texas, US	| the taking, being generated   |
	|  512-794-9070		| by a program that has failed  |
	|  alan@tivoli.com	| the Turing Test. *value!=null;|
	|_______________________________________________________|
	|#include "std.disclaimer" --- Your mileage may vary!   |
	.-------------------------------------------------------.
 

rst@cs.hull.ac.uk (Rob Turner) (02/05/91)

pat@megatest.UUCP (Patrick Powers) writes:

> [a lot of stuff about code inspections, which I generally agree with]

I believe that with software as it is currently written, code
                             --------------------------
inspections are by far the best way of removing bugs. A few competent
programmers around a table discussing a piece of code will quickly
iron out any faults.

However, code inspections should not really be necessary. If you have
*designed* your system properly, and have a collection of relatively
small modules with well defined interfaces, then coding these modules
should be a particularly straightforward task with little (though
admittedly some) scope for error.

My point is that inspections should be performed at the *design*
stage, before any coding has been carried out (even before the
implementation language has been chosen). Of course, the end product
of the design should be concrete enough for any programmer to be able
to produce the requisite code from it with little thought. Design
solutions which are at too high a level and leave a great deal to the
imagination of the programmer are no good (even though the programmer
may feel he/she has more room to express his/her talents).

Rob

kambic@iccgcc.decnet.ab.com (02/06/91)

In article <349@tivoli.UUCP>, alan@tivoli.UUCP (Alan R. Weiss) writes:
> In article <29653@mimsy.umd.edu> cml@tove.cs.umd.edu (Christopher Lott) writes:
>>In article <40530@genrad.UUCP>, mas@genrad.com (Mark A. Swanson) writes:
[...]
> I absolutely agree.  Still, as people get better at inspections,
> the name of the game really IS getting those defect Finds during
> inspections way up.  The trick is to not make it personal, but acknowledge
> that software engineering is iterative and social in nature (see Weinberg,
> Boehm, et. al).
> 
[...]
There seems to have been little discussion in this thread on the training 
required for doing inspections.  Creating the atmosphere for, and then 
running proper inspections takes time, money, training, schedule impact,
and potentially changes in either the psychology and/or staff of  the 
organization.  Are they valuable?  Absolutely.  Are they free?  Not initially.
The resultant value for some organizations has been measured and is part of the 
literature.  IMHO most organizations have to go through some type of 
justification of the value, and should continue to measure the value in
terms of $ and time.

GXKambic
Allen-Bradley
Standard disclaimers.

donm@margot.Eng.Sun.COM (Don Miller) (02/06/91)

>[...]
>There seems to have been little discussion in this thread on the training 
>required for doing inspections.  Creating the atmosphere for, and then 
>running proper inspections takes time, money, training, schedule impact,
>and potentially changes in either the psychology and/or staff of  the 
>organization.  Are they valuable?  Absolutely.  Are they free?  Not initially.
>The resultant value for some organizations has been measured as is part of the 
>literature.  IMHO most organizations have to go through some type of 
>justification of the value, and should continue to measure the value in
>terms of $ and time.
>
>GXKambic
>Allen-Bradley
>Standard disclaimers.

   There has also been no discussion on how to make most effective
   use of the time and personnel resources allocated to code reviews.
   Is everyone really still just giving out hard copy of source,
   hoping reviewers can figure it out, and then getting together
   to hammer it out?

   I envision a code review process which makes use of tools and
   practices designed to minimize resource consumption.  Static 
   analysis tools would be valuable towards understanding foreign
   code.  An on-line reviewer which cataloged comments of the
   reviewer would facilitate pre-meeting information gathering.
   A projection system accessing the actual code could be used
   as a guide during the review.
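   The comment catalog in particular is easy to approximate even without
   special tooling.  A minimal sketch in Python (file names, fields, and
   helpers are all invented here): reviewers file comments ahead of time,
   and the moderator gets one list in source order for the meeting to walk.

```python
# Invented sketch of an on-line reviewer comment catalog: comments
# are gathered before the meeting, then sorted into source order so
# the review can walk the code exactly once.

comments = []

def file_comment(reviewer, filename, line, text):
    comments.append({"reviewer": reviewer, "file": filename,
                     "line": line, "text": text})

def meeting_agenda():
    """All comments, ordered by file and line number."""
    return sorted(comments, key=lambda c: (c["file"], c["line"]))

file_comment("don", "parse.c", 120, "magic number 512 -- named constant?")
file_comment("alan", "parse.c", 14, "header comment is out of date")
for c in meeting_agenda():
    print(c["file"], c["line"], c["reviewer"], c["text"])
```

   Even this crude version captures the pre-meeting information gathering;
   the projection system then only has to display the agenda.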

   I'm proposing automation, or even just optimal manual techniques,
   as a way of addressing the primary concern regarding code reviews.
   Some of the tools that I've mentioned above exist and the others
   aren't difficult to imagine.  Does anyone out there use these
   techniques or are we still in the stone age?

--
Don Miller                              |   #include <std.disclaimer>
Software Quality Engineering            |   #define flame_retardent \
Sun Microsystems, Inc.                  |   "I know you are but what am I?"
donm@eng.sun.com                        |   

rcd@ico.isc.com (Dick Dunn) (02/06/91)

rh@smds.UUCP (Richard Harter) writes:
> [I wrote, about the matter of reviewing 150 lines/day]
> > A halfway-decent programmer can produce several
> > times that 150 l/d figure...proceeding through anything at 20 lines/hour
> > (that's 3 minutes per line, effectively???) is too slow to feel productive.

> Reality check time.  One can write several hundred lines of code in one
> session.  However that is exceptional.  Typical industry figures are 
> 5-10 thousand lines of delivered code per year which is 25-50 lines/day.
> Programmers who can average 100 lines/day are quite exceptional.

I agree so far...but that's making the case for slow code reviews even
tougher.  The more "exceptional" the programmer is, the deeper you cut into
productivity.

> Since many of us can and have written several hundred lines of code at
> one sitting, why is the average rate so low?  One obvious reason is that
> once you have written the code you have to compile and debug it.  Another
> is that a fair percentage of ones time gets eaten up in non-programming
> activities...

I think these factors, and others, are taken into account.  Certainly when
I say one can write several hundred lines in a day, I *don't* just mean
plunking the characters into a source file!  I mean producing finished
code.  The issue you're leaving out is that the best programmers are some
two orders of magnitude more productive than average.  This is an unusual
situation, in the sense that you don't find this range of productivity
difference in many other disciplines.  While we do have to take account of
the "average" programmer, I don't think we should be developing processes
which work so much to the disadvantage of the exceptional programmer.

> ...In a
> decent code review you have to verify that all external interfaces are
> correctly referenced and used,...

Most of this should be handled automatically, shouldn't it?  Perhaps I'm
missing your point.

>...that each line of code is correct...

This is at far too low a level for a useful code review.  Get better
programmers who can be trusted to write code that doesn't need
microscopic examination.

> ...You also want to verify that the modular
> decomposition is appropriate and that the modules fit into the over-all
> design...

But isn't a "code review" too late for this?  Why would you be re-examining
the decomposition (the framework) after you've built all the detail onto
the framework?  I'd rather sit down before coding and bounce ideas off
someone to see if the code layout makes sense *before* I write it.  Whether
you're building top-down, bottom-up, interior-out, or edges-in, the level
you're working on needs to be sound before you move to the next level.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Don't lend your hand to raise no flag atop no ship of fools.

dave@cs.arizona.edu (Dave P. Schaumann) (02/06/91)

In article <1991Feb1.151945.22863@cs.uiuc.edu> marick@cs.uiuc.edu (Brian Marick) writes:
>[...]
>I apply the inspection checklist to look for those kinds of faults I
>expect the other tests won't catch.  Then I run the tests and add new
>ones until I've satisfied branch, loop, multi-condition, and weak
>mutation coverage.  

Forgive my ignorance, but what is "weak mutation coverage"?  Also, as long
as I'm asking basic questions, what is "regression testing"?  I've seen the
term in a few places, but I've never seen it defined, and a subject search
in the library turned up nothing.

>Brian Marick
>Motorola @ University of Illinois
>marick@cs.uiuc.edu, uiucdcs!marick

Thanks in advance.

Dave Schaumann		|  And then -- what then?  Then, future...
dave@cs.arizona.edu	|  		-Weather Report

dwwx@cbnewsk.ATT.COM (david.w.wood) (02/06/91)

In article <15469.9102041851@olympus.cs.hull.ac.uk> rst@cs.hull.ac.uk (Rob Turner) writes:

   I believe that with software as it is currently written, code
				--------------------------
   inspections are by far the best way of removing bugs. A few competent
   programmers around a table discussing a piece of code will quickly
   iron out any faults.

   However, code inspections should not really be necessary. If you have
   *designed* your system properly, and have a collection of relatively
   small modules with well defined interfaces, then coding these modules
   should be a particularly straightforward task with little (though
   admittedly some) scope for error.

   My point is that inspections should be performed at the *design*
   stage, before any coding has been carried out (even before the
   implementation language has been chosen). Of course, the end product
   of the design should be concrete enough for any programmer to be able
   to produce the requisite code from it with little thought. Design
   solutions which are at too high a level and leave a great deal to the
   imagination of the programmer are no good (even though the programmer
   may feel he/she has more room to express his/her talents).


HEAR! HEAR! Inspections have to be performed starting with requirements
and go all the way through the process. Only doing code inspections
allows you to miss the expensive errors. Fagan code inspections
have the code inspected against/with respect to the design.
How do you know the design is correct? The design needs to be inspected.
It has to be inspected against a higher level design, [iterate up to
requirements]. Test plans are generated from requirements, architectures
and designs, they need to be inspected.

I see dozens of published articles (and dozens of postings :-) mentioning
CODE inspections, BUT NOBODY (except Rob) mentions requirements, designs,
test plans, documentation, etc.

Is it so obvious that nobody thinks it needs to be stated?

What gives?

David Wood
AT&T Bell Labs

marick@cs.uiuc.edu (Brian Marick) (02/06/91)

dwwx@cbnewsk.ATT.COM (david.w.wood) writes:
>Is it so obvious that nobody thinks it needs to be stated?

I hope so.  Inspections (of some sort) are, as you say, more important
for upstream products than for code, both because you find important
errors sooner, and also because there are precious few alternatives.

There are at least two, though:

1.  It's been my experience (and other people's, too) that writing
test cases tends to find errors in the specification/design.  I think
it's because test cases are "concrete" -- you must specify exact
inputs and outputs, with no vagueness about what actually happens.
It's then that you realize that *that* module's output can't possibly
fit with *this* module's input.

2.  I've found that writing the user's manual also tends to flush out
errors in a specification.  I think this is because it makes you
approach the specification from a different perspective -- a good
user's manual explains the "why" of the specification.  It has to,
because it must give the reader an understanding of the fundamentals;
without that, he or she won't be able to extrapolate from your
examples.  Sometimes, you discover that the "why" doesn't make sense.
(Note:  I've had good results doing this, but others have not.)

Because of this, I like to write the test cases and user documentation
well before I write the code.

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick

cl@lgc.com (Cameron Laird) (02/06/91)

In article <1991Feb6.143922.20539@cs.uiuc.edu> marick@cs.uiuc.edu (Brian Marick) writes:
			.
			.
			.
>Because of this, I like to write the test cases and user documentation
>well before I write the code.
			.
			.
			.
I insist on this, whenever I can.  For me, the obligatory order is
1.  user's manual;
2.  test suite;
3.  implementation.
In this submission, I'll only argue by assertion:  my experience
is that scheduling work in this sequence contributes mightily to
satisfaction, quality, timeliness, ...
--

Cameron Laird		USA 713-579-4613
cl@lgc.com		USA 713-996-8546 

john@newave.UUCP (John A. Weeks III) (02/09/91)

In article <1991Feb6.143922.20539@cs.uiuc.edu> marick@cs.uiuc.edu (Brian Marick) writes:

> 2.  I've found that writing the user's manual also tends to flush out
> errors in a specification.  I think this is because it makes you
> approach the specification from a different perspective -- a good
> user's manual explains the "why" of the specification.

> Sometimes, you discover that the "why" doesn't make sense.

I have noticed this when writing documentation for my programs.  My
rule of thumb is if I have to explain something, I must have approached
the problem wrong.  I have come across features that sounded so hokey
when I wrote the manual that I had to go back and "fix" the program.

BTW, most of these cases occurred when I was trying to commercialize
utility programs that just "grew", i.e., programs that had features 
added as they were needed rather than programs that were developed
from a spec.

-john-

-- 
===============================================================================
John A. Weeks III               (612) 942-6969               john@newave.mn.org
NeWave Communications                 ...uunet!rosevax!tcnet!wd0gol!newave!john
===============================================================================

alan@tivoli.UUCP (Alan R. Weiss) (02/12/91)

In article <3108.27aeae3b@iccgcc.decnet.ab.com> kambic@iccgcc.decnet.ab.com writes:
>In article <349@tivoli.UUCP>, alan@tivoli.UUCP (Alan R. Weiss) writes:
>> In article <29653@mimsy.umd.edu> cml@tove.cs.umd.edu (Christopher Lott) writes:
>>>In article <40530@genrad.UUCP>, mas@genrad.com (Mark A. Swanson) writes:
>[...]
>> I absolutely agree.  Still, as people get better at inspections,
>> the name of the game really IS getting those defect Finds during
>> inspections way up.  The trick is to not make it personal, but acknowledge
>> that software engineering is iterative and social in nature (see Weinberg,
>> Boehm, et. al).
>> 
>[...]
>There seems to have been little discussion in this thread on the training 
>required for doing inspections.  Creating the atmosphere for, and then 
>running proper inspections takes time, money, training, schedule impact,
>and potentially changes in either the psychology and/or staff of  the 
>organization.  Are they valuable?  Absolutely.  Are they free?  Not initially.
>The resultant value for some organizations has been measured as is part of the 
>literature.  IMHO most organizations have to go through some type of 
>justification of the value, and should continue to measure the value in
>terms of $ and time.
>
>GXKambic
>Allen-Bradley
>Standard disclaimers.


Inspections DO take training.  Distinguished from "code walkthroughs"
and "code reviews", inspection methodology can be (and SHOULD be)
applied to all deliverables, especially specifications.

We have found that training takes about a day, with reinforcement
during actual inspections by the Lead Moderator.  We learn by doing.
We decided to absorb the day's "cost" because of the high return on
investment.  The schedule "impact" is all positive, IMHO.  We believe
that inspections bring design issues to light faster than (endless,
unstructured) meetings do, by forcing people to do their homework *before*
the inspection meeting.  There undoubtedly is more up-front time devoted
to specification generation, but we're firmly convinced that this will yield
a MUCH cleaner and MUCH faster back-end (Integration, Functional, and System
Test/Beta Test) process resulting in schedule AND design achievement.



	.-------------------------------------------------------.
	|  Alan R. Weiss                                        |
	|  Manager, QA and Mfg.  _______________________________|
	|  Tivoli Systems, Inc.	| These thoughts are yours for  |
	|  Austin, Texas, US	| the taking, being generated   |
	|  512-794-9070		| by a program that has failed  |
	|  alan@tivoli.com	| the Turing Test. *value!=null;|
	|_______________________________________________________|
	|#include "std.disclaimer" --- Your mileage may vary!   |
	.-------------------------------------------------------.

"Quality is never an accident.  It is always the result of high intention,
sincere effort, intelligent direction, and skillful execution.  It represents
the wise choice of many alternatives."
 

alan@tivoli.UUCP (Alan R. Weiss) (02/12/91)

In article <7410@exodus.Eng.Sun.COM> donm@margot.Eng.Sun.COM (Don Miller) writes

>   There has also been no discussion on how to make most effective
>   use of the time and personnel resources allocated to code reviews.
>   Is everyone really still just giving out hard copy of source,
>   hoping reviewers can figure it out, and then getting together
>   to hammer it out?

Gilb's Inspections are NOT code reviews.  Tom Gilb's practices embody the
essencfe of good meeing management.  As both a technical person in SQA
AND a professional manager, I appreciate this.  So do our developers.


>   I envision a code review process which makes use of tools and
>   practices designed to minimize resource consumption.

Does this mean, "not wasting a lot of time?"

>  Static 
>   analysis tools would be valuable towards understanding foreign
>   code.  An on-line reviewer which cataloged comments of the
>   reviewer would facilitate pre-meeting information gathering.
>   A projection system accessing the actual code could be used
>   as a guide during the review.

Good ideas.  We use electronic mail extensively for pre-Defect Logging Meeting
and post-meeting discussions.

Since we're not actually inspecting code yet, I really like your idea of
an overhead projector.  However, as I said this is not a code walkthrough
or a code review.  People are expected to have read and commented on the
code beforehand.  The overhead would be used for clarification purposes
only.

>
>   I'm proposing automation, or even just optimal manual techniques,
>   as a way of addressing the primary concern regarding code reviews.
>   Some of the tools that I've mentioned above exist and the others
>   aren't difficult to imagine.  Does anyone out there use these
>   techniques or are we still in the stone age?
>
>--
>Don Miller                              |   #include <std.disclaimer>
>Software Quality Engineering            |   #define flame_retardent \
>Sun Microsystems, Inc.                  |   "I know you are but what am I?"
>donm@eng.sun.com                        |   

We software engineers are too often prone to suggesting technological
solutions for what is really a business organization problem:  not
wasting time, defining people's jobs in terms of a total engineering
perspective, getting people to Do The Right Thing, etc.  I would suggest,
IMHO, that none of these things can be solved by technology, only by
good management (including good self-management :-)



	.-------------------------------------------------------.
	|  Alan R. Weiss                                        |
	|  Manager, QA and Mfg.  _______________________________|
	|  Tivoli Systems, Inc.	| These thoughts are yours for  |
	|  Austin, Texas, US	| the taking, being generated   |
	|  512-794-9070		| by a program that has failed  |
	|  alan@tivoli.com	| the Turing Test. *value!=null;|
	|_______________________________________________________|
	|#include "std.disclaimer" --- Your mileage may vary!   |
	.-------------------------------------------------------.
 

"Quality is never an accident.  It is always the result of high intention,
sincere effort, intelligent direction, and skillful execution.  It represents
the wise choice of many alternatives."
 

alan@tivoli.UUCP (Alan R. Weiss) (02/12/91)

In article <795@caslon.cs.arizona.edu> dave@cs.arizona.edu (Dave P. Schaumann) writes:
>In article <1991Feb1.151945.22863@cs.uiuc.edu> marick@cs.uiuc.edu (Brian Marick) writes:
>>[...]
>>I apply the inspection checklist to look for those kinds of faults I
>>expect the other tests won't catch.  Then I run the tests and add new
>>ones until I've satisfied branch, loop, multi-condition, and weak
>>mutation coverage.  
>
>Forgive my ignorance, but what is "weak mutation coverage"?  Also, as long
>as I'm asking basic questions, what is "regression testing"?  I've seen the
>term in a few places, but I've never seen it defined, and a subject search
>in the library turned up nothing.
>
>>Brian Marick
>>Motorola @ University of Illinois
>>marick@cs.uiuc.edu, uiucdcs!marick
>
>Thanks in advance.
>
>Dave Schaumann		|  And then -- what then?  Then, future...
>dave@cs.arizona.edu	|  		-Weather Report


Lemme give you the practical definition of "regression testing":
testing designed to ensure that a previously found bug has been
fixed, AND to ensure that in correcting the problem no new
bugs have been introduced into the baseline.

In further detail, once a bug is found a test case is identified,
or created if the defect was found during a user-level and/or ad hoc
test.  This test case is then run again AFTER the bug has been fixed
in the next drop to test.  A collection of such test cases forms
a Regression Test Bucket, or RTB (IBM terminology ... be creative here).
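The bucket reduces to very simple bookkeeping: every fixed bug leaves behind
a witness test that is re-run against every subsequent drop.  A minimal
sketch in Python (the structure, the `avg` example, and all names are mine,
purely for illustration, not IBM's):

```python
# Minimal "Regression Test Bucket": each fixed bug contributes a
# test case, and the whole bucket is re-run on every new drop.

def buggy_avg(xs):          # the old drop: crashes on an empty list
    return sum(xs) / len(xs)

def fixed_avg(xs):          # the fix for the reported bug
    return sum(xs) / len(xs) if xs else 0

# Each entry: (description, input, expected output).  The first case
# is the test written when the empty-list bug was reported.
bucket = [
    ("empty list returns 0, not a crash", [], 0),
    ("ordinary case still works",         [2, 4], 3),
]

def run_bucket(fn):
    """Run the whole bucket against a drop; return failing descriptions."""
    failures = []
    for desc, arg, expected in bucket:
        try:
            ok = fn(arg) == expected
        except Exception:
            ok = False
        if not ok:
            failures.append(desc)
    return failures

print(run_bucket(buggy_avg))   # the old drop fails the regression case
print(run_bucket(fixed_avg))   # the fix passes, and broke nothing else
```

The second bucket entry is the important one: it is what catches a "fix"
that quietly breaks the baseline.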

Never heard of "weak mutation coverage."  Anyone else?




	.-------------------------------------------------------.
	|  Alan R. Weiss                                        |
	|  Manager, QA and Mfg.  _______________________________|
	|  Tivoli Systems, Inc.	| These thoughts are yours for  |
	|  Austin, Texas, US	| the taking, being generated   |
	|  512-794-9070		| by a program that has failed  |
	|  alan@tivoli.com	| the Turing Test. *value!=null;|
	|_______________________________________________________|
	|#include "std.disclaimer" --- Your mileage may vary!   |
	.-------------------------------------------------------.
 


"Quality is never an accident.  It is always the result of high intention,
sincere effort, intelligent direction, and skillful execution.  It represents
the wise choice of many alternatives."
 

jls@yoda.Rational.COM (Jim Showalter) (02/15/91)

Mutation coverage is the idea of deliberately ADDING bugs to code to see
how well your tests catch the bugs. That's about all I know about it because
that's about all I CARE about it.

gbeary@cherokee.uswest.com (Greg Beary) (06/19/91)

Does anyone know of any commercially available Code Inspection
Methodologies, i.e., ones that define a process, provide education,
and provide support (tools, documentation, etc.) for doing Code Inspection?

There's lots of CASE vendors that provide services/tools for 
SA/SD, does anyone do something similar for Code Inspection?
Any pointers would be welcomed. 

Thanks,
Greg


--
Greg Beary 				|  phone:	(303)541-6561
US West Advanced Technologies  		|  email:	gbeary@uswest.com
4001 Discovery Drive       		|  fax:		(303)541-6441
Boulder,   CO  80303			|

alesha@auto-trol.com (Alec Sharp) (06/20/91)

In article <1991Jun19.163846.7716@cherokee.uswest.com> gbeary@cherokee.uswest.com (Greg Beary) writes:
>Does anyone know of any commercially available Code Inspection
>Methodologies. IE. one that defines a process, provides education,
>and provides supports (tools, doc., etc.) for doing Code Inspection. 
>
>There's lots of CASE vendors that provide services/tools for 
>SA/SD, does anyone do something similiar for Code Inspection?
>Any pointers would be welcomed. 
>

The best review process I know of is that described by Freedman and
Weinberg in "Handbook of Walkthroughs, Inspections, and Technical
Reviews", published by Dorset House.  They are also available for
consulting/training on the process.  The book costs $45 and is worth
every penny.

We also require that all new C code lint cleanly before being
reviewed.  We're defining corporate coding standards, and are looking
into CodeCheck from Abraxas Software as a way to ensure that the
standards are met before reviewing the code.
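A gate like that can be scripted.  As a sketch of the spirit of the thing,
here is one lint-style check, names assigned but never read, written in
Python with the standard `ast` module (the rule and every name are mine,
and this is far cruder than lint or CodeCheck):

```python
import ast

def unused_assignments(source):
    """Crude lint-style check: report names assigned but never read."""
    tree = ast.parse(source)
    stored, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                stored.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    return sorted(stored - loaded)

code = """
x = 1
y = 2
print(x)
"""
print(unused_assignments(code))   # ['y'] -- this would block the review
```

Running a battery of such checks before anyone sits down keeps the
reviewers' time for the defects a machine can't find.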

I know that AT&T Bell Labs has a rigorous code review scheme - if you
know anyone there you might try to find out what they do.

alec...



-- 
-----Any resemblance to the views of Auto-trol is purely coincidental-----
alesha@auto-trol.com
Alec Sharp           Auto-trol Technology Corporation
(303) 252-2229       12500 North Washington Street, Denver, CO 80241-2404