[comp.software-eng] code reviews

tada@athena.mit.edu (Michael Zehr) (06/16/89)

At the request of one of our clients, my company is about to have its
first code review.  I've read a fair amount about the benefits of code
reviews, but very little on methods or procedures.  

Among the 30 other programmers here, only 2 (that i know of) have ever
been involved in a code review.  From talking with them, i gather that
there are typically two kinds of reviews -- formal design/functionality
reviews, and informal peer reviews, covering implementation details.  

The review requested by the client is supposed to be a formal algorithm
and functionality review.  We have a fairly good idea how to handle that
-- the client is really only interested in making sure someone else
(mainly the project manager) has looked at my program and verified that
it's correct (above and beyond normal testing that would be done).  

Since part of my job here is to improve programmer productivity, i'm
interested in eventually implementing some sort of procedure for
periodic peer reviews.  Does anyone have suggestions or
recommendations?

thanks for any advice,
michael j zehr

stein@pixelpump.osc.edu (Rick 'Transputer' Stein) (06/16/89)

In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
>At the request of one of our clients, my company is about to have its
>first code review.  I've read a fair amount about the benefits of code
>reviews, but very little on methods or procedures.  
First question:
Are you in a DoD or ANSI environment?  If you are DoD (2167), and you've
never experienced a Critical Design Review, are you in for an _EXPERIENCE_!
If it's ANSI, well, it's only 80% of DoD. :-)
>
>Among the 30 other programmers here, only 2 (that i know of) have ever
>been involved in a code review.  From talking with them, i gather that
>there are typically two kinds of reviews -- formal design/functionality
>reviews, and informal peer reviews, covering implementation details.  
Actually, this is really a function of corporate policy: the presentation
materials needed for each kind of formal review are usually stipulated by
the program director, etc.  Basically, you are right.
System requirements reviews, preliminary design reviews, and critical design
reviews are usually the primary customer interfaces.  Code walkthroughs
are more generally used one-on-one between a supervisor and a staff member
to examine what the software physically looks like.  A critique of the
code is conducted ("compute the constant up above and then add it, to
speed the loop" -- mostly cosmetic stuff).
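To make the flavor of that critique concrete, here is a minimal sketch in
C of the before-and-after (the function and variable names are invented
for illustration, not taken from any real review):

    /* Invented example.  Before: rate * base is recomputed on every
       pass through the loop. */
    void scale_before(double total[], double raw[], int n,
                      double rate, double base)
    {
        int i;
        for (i = 0; i < n; i++)
            total[i] = raw[i] + rate * base;
    }

    /* After: compute the constant up above, then add it in the loop. */
    void scale_after(double total[], double raw[], int n,
                     double rate, double base)
    {
        int i;
        double scale = rate * base;    /* hoisted loop invariant */

        for (i = 0; i < n; i++)
            total[i] = raw[i] + scale;
    }

A fresh reader spots the loop-invariant expression at a glance, which is
exactly the kind of small win a walkthrough tends to produce.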
>
>The review requested by the client is supposed to be a formal algorithm
>and functionality review.  We have a fairly good idea how to handle that
>-- the client is really only interested in making sure someone else
>(mainly the project manager) has looked at my program and verified that
>it's correct (above and beyond normal testing that would be done).  
Sounds like a Critical Design Review to me.  This is the phase in the
SWLC (software life cycle) that occurs just before coding begins.  Usually
you present DFDs, HIPOs, flowcharts, and other design data.  Sounds like
you've already had a code walkthrough, so you're clearly beyond the CDR in
the SWLC.  You might want to show the unit development folders.
>
>Since part of my job here is to improve programmer productivity, i'm
>interested in eventually implementing some sort of procedure for
>periodic peer reviews.  Does anyone have suggestions or
>recommendations?
If you have a style guide specifying the formalism to use when coding
software, this helps quite a bit.  That way you have a standard reference
and rule book to use as a judge and coach.
>
>thanks for any advice,
>michael j zehr
No problem.  Hope it helps.

-=-
Richard M. Stein (aka Rick 'Transputer' Stein)
Concurrent Software Specialist @ The Ohio Supercomputer Center
Ghettoblaster vacuum cleaner architect and Trollius semi-guru
Internet: stein@pixelpump.osc.edu, Ma Bell Net: 614-292-4122

rar@KRONOS.ADS.COM (Bob Riemenschneider) (06/17/89)

There's a very useful account of the why's and how's of code review in

	_Handbook of Walkthroughs, Inspections, and Technical Reviews_
	Daniel P. Freedman and Gerald M. Weinberg
	Little, Brown (ISBN 0-316-292826)

Everything you want to know about various alternative approaches, and then 
some.  As far as I know, the third edition (1982) is the most recent.

							-- rar

johnk@opel.UUCP (John Kennedy) (06/19/89)

In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
[...]
>
>Since part of my job here is to improve programmer productivity, i'm
>interested in eventually implementing some sort of procedure for
>periodic peer reviews.  Does anyone have suggestions or
>recommendations?
>
>michael j zehr


I can't help but chuckle when I see code reviews and productivity in the
same paragraph.  It's good for the project, it's what the customer wants,
but be assured that it's an increase in overhead and a decrease in productivity.

I wish this weren't true.

John

meuer@klein (Mark Meuer) (06/19/89)

In article <116@opel.UUCP> johnk@opel.UUCP (John Kennedy) writes:
>
>I can't help but chuckle when I see code reviews and productivity in the
>same paragraph.  It's good for the project, it's what the customer wants,
>but be assured that it's an increase in overhead and a decrease in productivity.
>
>I wish this weren't true.
>
>John

I beg to differ.  I used to work for a software company where peer
reviews were used extensively.  They were very helpful in a number of
respects, and were well worth the effort.

It is true that reviews add overhead to the development, but that time
is made up for by better designs and much better communication and
understanding between members of the programming team.

-mark

duncan@dduck.ctt.bellcore.com (Scott Duncan) (06/19/89)

In article <116@opel.UUCP> johnk@opel.UUCP (John Kennedy) writes:
>
>I can't help but chuckle when I see code reviews and productivity in the
>same paragraph.  It's good for the project, it's what the customer wants,
>but be assured that it's an increase in overhead and a decrease in productivity.

I suppose that, in terms of getting the initial release of the system out of
implementation and into system testing (or out the door), this is true.
However, the contention behind code reviews is that they improve the quality
(and reliability) of the code and help ensure that the code meets customer
expectations more adequately.

The experimental results and industry experience to which I have been exposed
(at a number of companies, large and small) suggest that such reviews
(including design reviews) tend to produce a more stable, less error-prone
system over its lifecycle.  Errors found in the field also tend to cost
orders of magnitude more to fix than they would have cost to correct during
implementation.

People also point to the fact that such reviews are educational, provide more
visibility to the programming effort, and engender a sense of responsibility
for the system (not just an individual's piece of it) across the project.

It also seems true that people who have "customers" think about and are
concerned with quality issues while those who have "users" are more affected
by productivity.  Productivity, ultimately, is an internal concern while
quality is an external one.  As someone noted within the last week (and I
paraphrase): in a year, they won't remember how fast you delivered it, just
how good it is.

And while there are definitely market considerations when it comes to releasing
a product, I'd say they have to be accounted for by factoring such reviews (and
other quality efforts) into the schedules.  I would say that in the short-term,
the claims of overhead and productivity "loss" could be substantiated.  But in
terms of satisfaction with the product, long-term cost, and productivity gains
in less maintenance, I'd have to say such reviews end up being worth it.

Speaking only for myself, of course, I am...
Scott P. Duncan (duncan@ctt.bellcore.com OR ...!bellcore!ctt!duncan)
                (Bellcore, 444 Hoes Lane  RRC 1H-210, Piscataway, NJ  08854)
                (201-699-3910 (w)   201-463-3683 (h))

stein@pixelpump.osc.edu (Rick 'Transputer' Stein) (06/19/89)

In article <16925@bellcore.bellcore.com> duncan@ctt.bellcore.com (Scott Duncan) writes:
>And while there are definitely market considerations when it comes to releasing
>a product, I'd say they have to be accounted for by factoring such reviews (and
>other quality efforts) into the schedules.  I would say that in the short-term,
>the claims of overhead and productivity "loss" could be substantiated.  But in
>terms of satisfaction with the product, long-term cost, and productivity gains
>in less maintenance, I'd have to say such reviews end up being worth it.

I must agree.  Code walkthroughs, design reviews, acceptance testing, etc.
are all part of the software engineering process.  These elements provide
accountability, and this is a necessary part of any environment where
funds are spent developing a product.  The activities are _part_ of the
process, and while considered a nuisance by some, are invaluable tools
for management.
-=-
Richard M. Stein (aka Rick 'Transputer' Stein)
Concurrent Software Specialist @ The Ohio Supercomputer Center
Ghettoblaster vacuum cleaner architect and Trollius semi-guru
Internet: stein@pixelpump.osc.edu, Ma Bell Net: 614-292-4122

ofut@hubcap.clemson.edu (A. Jeff Offutt) (06/20/89)

From article <116@opel.UUCP>, by johnk@opel.UUCP (John Kennedy):
> In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
> [...]
>>
>>Since part of my job here is to improve programmer productivity, i'm
>>...
>>periodic peer reviews.
>>
>>michael j zehr


> I can't help but chuckle when I see code reviews and productivity in the
> same paragraph.  It's good for the project, it's what the customer wants,
> but be assured that it's an increase in overhead and a decrease in productivity.

Now be careful that when you say "increase in overhead and a decrease in
productivity" we are all counting the same apples.  It is true that
code reviews take time and energy away from other tasks, but they can
also save time and energy later on.  Reviewing code can significantly reduce
the amount of time spent testing the software.  In fact, some (inconclusive)
studies have indicated that code reviews are a more cost-effective way of
finding errors in software than traditional testing (other studies have
shown the opposite).  Whether or not you believe the studies completely,
it is clear that reviewing code is a very effective way to find errors
in the program.
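To make that concrete, consider this invented C fragment: it is legal
code, compiles cleanly, and can slip past casual testing, yet a second
reader catches it almost immediately:

    #include <stdio.h>

    #define ERROR 1

    int main(void)
    {
        int status = 0;                       /* no error has occurred */

        if (status = ERROR)                   /* defect: assignment, not == */
            printf("failure: %d\n", status);  /* now fires on every run */
        return 0;
    }

A reviewer who reads the condition aloud hears the assignment; a test
suite only notices if someone thought to check the no-error path.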

During a code review, the programmer can check that A) the software is
somewhat related to the requirements and B) the requirements are reasonably
close to what the customer wants and what the programmers can implement.
This can significantly reduce the amount of time and money spent on
maintaining the software.

In short, saying that code reviews decrease productivity is only true over
a very *small* part of the software lifecycle, when in fact, code reviews
can decrease cost and increase quality over the *entire* lifecycle.

(Of course, code reviews are also invariably tedious, boring and painful.)
-- 
Jeff Offutt
Department of CS, Clemson University, Clemson SC  (803) 656-5882
Internet:	ofut@hubcap.clemson.edu

paulc@microsoft.UUCP (Paul Canniff 2/1011) (06/20/89)

>In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
>>Since part of my job here is to improve programmer productivity, i'm
>>interested in eventually implementing some sort of procedure for
>>periodic peer reviews.  Does anyone have suggestions or
>>recommendations?
>>
>>michael j zehr


In article <116@opel.UUCP> johnk@opel.UUCP (John Kennedy) writes:
>I can't help but chuckle when I see code reviews and productivity in the
>same paragraph.  It's good for the project, it's what the customer wants,
>but be assured that it's an increase in overhead and a decrease in productivity.
>
>I wish this weren't true.

I have the opposite opinion.  It is good for the project, it is good
for the team members, and it *does* increase overall productivity if done
correctly.  By this I mean it encourages folks to write code that they
are willing to expose to the world;  such code usually has fewer bugs,
and is better presented and easier to maintain.

If, on the other hand, you make an extreme case for finicky format details
and have curly-brace wars in the code review, folks will spend lots of
time formatting the source, making whitespace deltas, etc.
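For instance (an invented fragment -- the two functions below differ only
in brace placement, and the compiler sees no difference at all):

    static int count, limit;
    static void flush_buffer(void) { count = 0; }

    void style_one(void)           /* one reviewer's taste ...  */
    {
        if (count > limit) {
            flush_buffer();
        }
    }

    void style_two(void)           /* ... versus another's      */
    {
        if (count > limit)
        {
            flush_buffer();
        }
    }

A review delta that merely converts one style to the other touches every
line and fixes nothing.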

So, the review shakes out some bugs (fresh pair of eyes and all that),
plus it puts the fear of ridicule into the lazy programmer who writes
unmaintainable code.  And, it gives programmers a chance to (1) see code
they may someday have to work on, while the original developer is there
to answer questions (and insert needed comments) and (2) learn from
each other.

The pitfalls:  too many reviewers, wrong reviewers, wrong criteria, no
follow-up on review comments, and *fear* of the review.  That last one
is a big problem;  no-one wants to be the first reviewed, especially if
the criteria and goals are not clearly spelled out.

mcp@sei.cmu.edu (Mark Paulk) (06/20/89)

In article <5801@hubcap.clemson.edu> ofut@hubcap.clemson.edu (A. Jeff Offutt) writes:
>From article <116@opel.UUCP>, by johnk@opel.UUCP (John Kennedy):
>> In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
>> [...]
>>>
>>>Since part of my job here is to improve programmer productivity, i'm
>>>...
>>>periodic peer reviews.
>>>
>>>michael j zehr
>
>
>> I can't help but chuckle when I see code reviews and productivity in the
>> same paragraph.  It's good for the project, it's what the customer wants,
>
>Now be careful that when you say "increase in overhead and a decrease in
>productivity" we are all counting the same apples.  It is true that
>code reviews will take time and energy away from other tasks, but it can
>also save time and energy later on.  Reviewing code can significantly reduce


The following references have some interesting data and insights on the
role and power of inspections.  Unfortunately it is difficult to get
hard data on the effectiveness of methods such as inspections, but there
has been some work published which is much more pertinent than my
opinion would be... Pertinent quotes follow each reference.

Let me also emphasize the correct definition of terms!  Inspections, reviews,
walkthroughs, etc., are frequently overloaded terms.  For the "official"
definitions, see IEEE Standard 1028 (or Fagan's original paper, for that
matter).  There are distinct differences between the concepts; the most
formal is the inspection.  I use "peer review" as the all-inclusive term
because it is not defined in 1028, but some people use it in the hierarchy
	desk check
	peer review
	walkthrough
	inspection
as meaning a 1-1 review with A peer (as opposed to a group of peers).

Review, by itself, covers a wide range of sins.  Much of this discussion
has been oriented toward technical reviews, i.e., CDR- and PDR-class reviews,
which are NOT what the original poster was asking about.

[ACKE89] A.F. Ackerman, L.S. Buchwald, and F.H. Lewski,
"Software Inspections: An Effective Verification Process,"
IEEE Software, Vol. 6, No. 3, May, 1989, pp. 31-36.
 Software inspections have been found to be superior to reviews
and walkthroughs, p. 31
 collection and analysis of data for immediate and long-term
process improvement, p. 32
 inspections improve quality and productivity, p. 34
 inspections give project management more definite and more dependable
milestones than less formal review processes, p. 35

[GILB88] T. Gilb,
PRINCIPLES OF SOFTWARE ENGINEERING MANAGEMENT,
Addison-Wesley, Reading, MA, 1988.
 The inspection method is the most effective quality control method for
software specification documentation we know about.  p. 68
 Testing is a maximum of 50 to 55% effective at defect identification and
removal for a single test process.  p. 221
 Inspection is about 80% (+/- 20%) effective in removing existing defects.
p. 221
 The average is five hours saved for every hour invested in inspection. 
p. 221
 Inspected programs were ten times cheaper to maintain than ... similar
non-inspected programs.  p. 221
 High level inspections (requirements and design specification) were the
most powerful things they could do.  p. 244

[GRAD87] R.B. Grady and D.L. Caswell,
SOFTWARE METRICS: ESTABLISHING A COMPANY-WIDE PROGRAM,
Prentice-Hall, Englewood Cliffs, NJ, 1987.
 The five "modern programming practices" which had the strongest correlation
with productivity were top-down design, modular design, design reviews, code
inspections, and quality assurance programs.  p. 20
 Projects of highest productivity are among those with the lowest defect
densities.  p. 140

[MYER88] W. Myers,
"Shuttle code achieves very low error rate,"
IEEE Software, Vol. 5, No. 5, September, 1988, pp. 93-95.
 500 KSLOC of Space Shuttle source code with 0.11 errors/KSLOC, credited by
IBM/FSD to process definition, rigorous inspection of work products across
the process, independent software verification, defect-cause analysis,
knowledge engineering, expert systems, specialized tools, and "good old value
gained from lessons learned." (Barbara Kolkhorst)  Reduced from 2 errors/KSLOC,
versus 8-10 errors/KSLOC for industry.
 Effort/time spent on software reconfiguration reduced by 50%.
 About 85% of major errors discovered in inspection.
--
Mark C. Paulk 
    mcp@sei.cmu.edu
      "The essence of true adulthood is deferred gratification."

crm@romeo.cs.duke.edu (Charlie Martin) (06/21/89)

In article <116@opel.UUCP> johnk@opel.UUCP (John Kennedy) writes:
>In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
>[...]
>>
>>Since part of my job here is to improve programmer productivity, i'm
>>interested in eventually implementing some sort of procedure for
>>periodic peer reviews.  Does anyone have suggestions or
>>recommendations?
>>
>>michael j zehr
>
>
>I can't help but chuckle when I see code reviews and productivity in the
>same paragraph.  It's good for the project, it's what the customer wants,
>but be assured that it's an increase in overhead and a decrease in productivity.
>
>I wish this weren't true.
>
>John

The way I've heard it is that the use of formal reviews has the effect
of lowering lifecycle cost and cost considered through delivery.  It
does have the effect of increasing overhead cost *during* design and
coding, but most methodologies have that effect.

The point here is that coding cost is only about 15% of total cost in
waterfall methodologies (harder to evaluate in, say, incremental
development), so it's worthwhile to increase coding cost by 25% (an overall
increase of 0.25 * 15% = 3.75%, call it 4%) to gain 10-20% overall.

As usual, I'm not typing this where I can lay hands on the references,
but several studies have shown that, of all the methods that can be
applied to software development, reviews have the greatest impact on
cost reduction.

Charlie Martin (crm@cs.duke.edu,mcnc!duke!crm) 

perry@apollo.COM (Jim Perry) (06/21/89)

In article <13716@umn-cs.CS.UMN.EDU> meuer@klein.UUCP (Mark Meuer) writes:
>In article <116@opel.UUCP> johnk@opel.UUCP (John Kennedy) writes:
>>
>>I can't help but chuckle when I see code reviews and productivity in the
>>same paragraph.  It's good for the project, it's what the customer wants,
>>but be assured that it's an increase in overhead and a decrease in productivity.
>>
>>I wish this weren't true.
>>
>>John
>
>I beg to differ.  I used to work for a software company where peer
>reviews were used extensively.  They were very helpful in a number of
>respects, and were well worth the effort.
>
>It is true that reviews add overhead to the development, but that time
>is made up for by better designs and much better communication and
>understanding between members of the programming team.
>
>-mark

I'll second that.  

Many people seem to think of reviews as either a very formal process where
you're going through a lot of unnecessary formality to satisfy some external
requirement, or as an antagonistic process with a manager poring over your
code like a hostile grader.  Clearly neither of these situations is likely
to yield a good software engineering environment, which is not to say that
there aren't sites out there run that way.  

My experience of peer reviews is just that: review of code (or a design or
whatever) by one or more peers -- engineers whose ability and insights you 
respect as much as your own.  Just as with writing English, one becomes
blind to certain aspects of one's own writing over time, and having a fresh
eye look over it can turn up flaws, inconsistencies, etc.  It's not an 
antagonistic process, and you don't expect to turn up lots of problems, but
you'd be surprised at the number of things you *can* turn up this way,
even in the best code.

A very real additional benefit of such an environment: where code is
written with the expectation that others will be reading it, the tendency
is toward code that is more readable, more understandable, and better
documented than otherwise, since it must be understood by a reader who
is generally not as familiar with the code as the writer.  Since each
writer is also a reader, this is to everyone's benefit.  Of course all
of those aspects are in direct support of future maintenance of such code
at very little incremental cost.

While I think there's more room for formality in reviewing of design specs
and suchlike, code reviews can be quite informal, for instance just having
a colleague look over the source/listing of a module, and diffs from the
previous release where appropriate, as part of the release/freeze process.
It helps if there's a librarian/release coordinator to manage the
process of generating listings/diffs and allocating reviewers, but even
an informal process ("hey, Joe, want to take a look at this?") is better
than never having anyone look at your code.  It generally doesn't take 
more than a few minutes to review most coding updates, and even reading a
completely new module at this level should take under an hour.  (Remember,
you trust the writer and assume it works; you're just double-checking, not
grading.)  That's pretty minimal overhead, for a pretty big payback.
-- 
Jim Perry   perry@apollo.com    HP/Apollo, Chelmsford MA
This particularly rapid unintelligible patter 
isn't generally heard and if it is it doesn't matter.

steinar@fdmetd.uucp (Steinar Overbeck Cook) (06/21/89)

 In article <12047@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr) writes:
 [...]
 >
 >Since part of my job here is to improve programmer productivity, i'm
 >interested in eventually implementing some sort of procedure for
 >periodic peer reviews.  Does anyone have suggestions or
 >recommendations?
 >
 >michael j zehr

In our company we use Arthur Andersen's Method/1 as our system
development method.  Our experience is that it is MUCH more expensive to
find an error in the late phases of a project than to find it in the
requirements or design phase.

In fact, the cost of a change grows significantly (10:1 to 100:1, or even
more!) the later it comes in the software development life cycle.  This,
of course, depends on the size of your project/system.

This is described in: Boehm, B.W., _Software Engineering Economics_,
Prentice-Hall, 1981.


-- 
Steinar Overbeck Cook, Fellesdata a.s, P.O. Box 248, 0212 OSLO 2, NORWAY
Phone : +47 2 52 80 80                            Fax   : +47 2 52 85 10
E-mail : ...!mcvax!ndosl!fdmetd!steinar  or       steinar@fdmetd.uucp
<The opinions expressed, if any, do not represent Fellesdata a.s>

leichter@CS.YALE.EDU (Jerry Leichter) (06/22/89)

Many years ago, in wilder times, I and two friends spent a fair amount of
time, ahem, breaking into our college computer.  We generally worked on hacks
cooperatively.

One day, one of us developed a very simple program.  He decided that it was
simple enough, and so clearly correct, that he could just try it out.

Well, needless to say, the program had a completely obvious bug which one of
us spotted instantly on looking at the code.  The effect of the bug was to
cause a system crash leaving fairly obvious pointers to the culprit.  The
author had run the thing twice...the first crash "must have been a
coincidence".  (Back in those days, the average time between crashes was a
couple of hours, so this was not a completely absurd thought.)  We were VERY
lucky not to get caught.

The rule we learned from this was:  Never hack alone.  From there on out, none
of us would run a new hack program unless one of the others had checked it out
first.  We had no further incidents of this particular sort.

Years later, I worked for an organization with a policy of code reviews.  We
actually had multiple levels of review.  There were informal design reviews.
In practice, most people hated those and they were not very useful - they
tended to bring out the worst in those people who insisted on doing things in
particular standard ways which were usually irrelevant.

On the other hand, we also had a very strict CODE review policy - we called it
"releasing".  Any particular implementation project, whether new code or a
patch to existing code, required two people, a coder and a releaser.  The
coder wrote and tested the code, and handed it to the releaser.  The releaser
was responsible for checking that the code met its specs, was written to our
internal standards (which were not very strict; what they required was all
very simple and reasonable), was reasonably documented, and so on.  More
generally, the releaser was expected to understand the code as well as the
coder did.  If he found problems, he could at his discretion fix them himself
or require the coder to fix them.

One very important element of this system was that there were no separate
groups of coders and releasers - everyone did both jobs.  If you released my
code today, I might release yours tomorrow.  This avoided the typical
conflicts you often see between development and QA groups.  (Yes, it can have
disadvantages, too.)  It also provided some check to ensure that people didn't
try to cut corners on either job - though of course that happened.  (One
person who worked for me for a while viewed his releaser's role as one of
finding and fixing all the bugs.  He was known to hand over code that wouldn't
compile.  Needless to say, he was seen as a problem....)  However, in my
experience the system worked quite well, and the code quality that resulted
was quite high.  (Sorry, I can't quantify any of that...but I'd certainly use
such a system again.)

What did doing this cost?  It's hard to be exact because people varied.  A
good manager had to adjust for the individuals involved.  For most people, I'd
estimate release time at about half of pure coding time - where pure coding
time did NOT include design or debugging.  For the "problem" person I mention-
ed above, I'd have to drop coding time and increase debugging and (even more
so) release time; the total would end up somewhat higher.  But that rarely
happened.  Since except for small, isolated projects pure coding time wasn't
likely to be more than a third or so of the total time, release time might
add about 15% to the total - and it could often be used to fill in otherwise
dead spots on schedules, so in real terms its contribution to total completion
time might be quite small.  On the other hand, it also provided you with a
second expert on the particular piece of code involved.  The training to do
that would have cost you some time, too.

The group in which I saw this done was not very large (50 people perhaps in
total) and was probably unusually talented, since management made it a point
to hire the brightest people they could, regardless of experience.  How well
this kind of approach would generalize to other groups, I can't really say.
(I also believe that the group eventually stopped doing this when pressure
from above to get stuff done faster became much greater.  There's never time
to do it right, but always time to do it again....)

							-- Jerry

tmcclory@emdeng.Dayton.NCR.COM (Thomas.J.Tom.McClory) (06/22/89)

In article <8906161716.AA16121@kronos.ADS.COM> rar@KRONOS.ADS.COM (Bob Riemenschneider) writes:
>There's a very useful account of the why's and how's of code review in
>
>	_Handbook of Walkthroughs, Inspections, and Technical Reviews_
>	Daniel P. Freedman and Gerald M. Weinberg
>	Little, Brown (ISBN 0-316-292826)
>
>Everything you want to know about various alternative approaches, and then 
>some.  As far as I know, the third edition (1982) is the most recent.
>
>							-- rar

I will throw in references for the "Design & Code Inspection Process", 
a specific variation of the general peer review methods.

M. E. Fagan, "Design and Code Inspections to Reduce Errors in
Program Development," _IBM Systems Journal_, Vol. 15, No. 3, 1976,
pp. 182-211.
	Combines a "how to" of the process with the results
	of using the process on several IBM products.

M. E. Fagan, "Advances in Inspection,", _IEEE Transactions on 
Software Engineering_, July 1986, pp. 744-751.
	A follow-on work to the paper above that refines
	the process description, and documents additional
	successes using the process.

J. H. Dobbins, "Inspections as an Up-Front Quality Technique," in the 
_Handbook of Software Quality Assurance_, G. G. Schulmeyer and J.
I. McManus, eds., Van Nostrand Reinhold, New York, 1987, pp. 137-177.
	A case study of inspections as used by the IBM 
	Federal Systems Division.  Describes how the process
	was implemented and the process measurements that
	the Q.A. staff implemented.  Describes how statistical
	process control can be applied to software development.

A. Frank Ackerman, L. S. Buchwald, and F. H. Lewski, "Software
Inspections: An Effective Verification Process," _IEEE Software_,
May, 1989, pp. 31-36.
	A good overview description.  The two Fagan papers are
	better how-to descriptions, but this is a good
	place to start if you don't know about inspections.

Hope this helps,

-- 
Thomas J. McClory			phone: (513)445-6819
Engineering & Manufacturing-Dayton	email: tmcclory@emdeng.NCR.COM
NCR Corporation		EMD/3
Dayton, OH

collin@hpindda.HP.COM (Collin Park) (06/22/89)

-- I'm trying to resist the temptation to enter the debate of whether
   code reviews are Good Things or not...

Umm, one thing that seems to be in the literature but is often
under-emphasized (in my personal experience) is the follow-up meeting.
But I'm getting ahead of myself here.  When a defect (DeMarco has
taught me not to say "bug") is found during a review, there are
(at least?) two approaches that can be taken:

1. Discuss general ways to fix the defect -- consider various approaches,
   pros & cons, etc.  This has the advantage of being more educational
   to the reviewee but has the risk of causing the entire meeting to
   go down a "rat hole."  A skilled facilitator is essential to prevent
   this.  (Actually, a skilled facilitator is essential regardless of
   whether you do [1] or [2].)

2. Once the defect is identified, cut off discussion and leave it to the
   engineer to fix it offline.

Whether you do [1] or [2], though, a follow-up meeting is very important
("essential"?) to verify that the defects did in fact get fixed without
introducing new ones.  


------------------------------------------------------------------------------
Unless explicitly stated otherwise, opinions expressed above are my
own and do not necessarily reflect those of the Hewlett-Packard Company
or any other person or entity.

collin park			Hewlett-Packard Company
collin%hpda@hplabs.hp.com	19420 Homestead Road -- MS 43LT
				Cupertino, CA  95014  USA

runyan@hpirs.HP.COM (Mark Runyan) (06/26/89)

>/ stein@pixelpump.osc.edu (Rick 'Transputer' Stein) /  7:53 am  Jun 19, 1989 /
>...  Code walkthroughs, design reviews, acceptance testing, etc.
>are all part of the software engineering process.  These elements provide
>accountability, and this is a necessary part of any environment where
>funds are spent developing a product.  The activities are _part_ of the
>process, and while considered a nuisance by some, are invaluable tools
>for management.

My posting probably isn't going to sound like it, but I am a believer in
code walkthroughs, etc.  However, it surprises me how little this is done
in the industry (or, at least, at companies that I've been in).  Sometimes
the project is so uncomplicated that code reviews are considered
unnecessary, while at other times, managers and engineers resist code
reviews because of the fear of an "elitist" attitude (in this case, it
isn't a peer review, but an owner review, i.e. "you" can't put it into "my"
code unless "you" check with "me" first).  There are right ways and wrong
ways to implement design and code reviews, and the wrong way is usually
taken when the implementer doesn't understand what the review is supposed
to do, but is instead implementing some cookbook method from some software
engineering course.

Mark Runyan