[comp.software-eng] industrial engineering and metrics

marick@m.cs.uiuc.edu (Brian Marick) (05/16/91)

jls@netcom.COM (Jim Showalter) writes:

>I find it odd that metrics have been
>successfully used in just about all other industries to improve quality,
>reduce risk, identify incipient problems, increase productivity, etc, but

Let's be careful.  For example, consider industrial engineering, where
people used to measure "cost of quality".  The cost of quality was
calculated by adding together two curves.  The first curve was the
cost of shipping bad product, which decreases with increasing
inspection.  The second is the cost of inspection itself, which
obviously increases.  Businesses strived to position themselves at the
minimum, the "acceptable quality level".

This turns out to be a bad idea.  The root problem is that it's very
hard to measure the true cost of shipping bad product.  For example,
how do you measure low-level annoyance with rattling parts --
something that doesn't result in a warranty charge, but may mean the
customer buys another brand next time?  This incomplete data then led
directly to mistaken strategies.

The modern approach, following Taguchi, Deming, and company, has (in
principle) abandoned a measurement (cost of quality) and replaced it
with a system of faith that asserts that increased quality is *always*
cost-effective.  In this case, the faith has worked better than the
metric.

I'd guess that a sizable percentage of the anti-metric camp is
justifiably fearful of the effects of measuring (and, inevitably,
concentrating on) the inessentials.  Another sizable percentage is
spoiled rotten.

Disclaimer:  I'm not an industrial engineer.  What I know, I know from
taking industrial engineering courses, reading, and being an employee
of a company that's been quite successfully fanatical about quality.

Further disclaimer:  And, of course, even I realize my capsule summary
of industrial engineering is over-simplified.

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick

orville@weyrich.UUCP (Orville R. Weyrich) (05/16/91)

In article <1991May15.223135.12381@m.cs.uiuc.edu> marick@m.cs.uiuc.edu (Brian Marick) writes:
>jls@netcom.COM (Jim Showalter) writes:
>
>>I find it odd that metrics have been
>>successfully used in just about all other industries to improve quality,
>>reduce risk, identify incipient problems, increase productivity, etc, but
>
>Let's be careful.  For example, consider industrial engineering, where
>people used to measure "cost of quality".  The cost of quality was
>calculated by adding together two curves.  The first curve was the
>cost of shipping bad product, which decreases with increasing
>inspection.  The second is the cost of inspection itself, which
>obviously increases.  Businesses strived to position themselves at the
>minimum, the "acceptable quality level".
>
>This turns out to be a bad idea.  The root problem is that it's very
>hard to measure the true cost of shipping bad product.  For example,
>how do you measure low-level annoyance with rattling parts --
>something that doesn't result in a warranty charge, but may mean the
>customer buys another brand next time?  This incomplete data then led
>directly to mistaken strategies.
>
>The modern approach, following Taguchi, Deming, and company, has (in
>principle) abandoned a measurement (cost of quality) and replaced it
>with a system of faith that asserts that increased quality is *always*
>cost-effective.  In this case, the faith has worked better than the
>metric.

This principle can't usefully be carried to an extreme -- a cost/benefit
analysis must still be done. Consider, for example, an automotive part that is
presently machined to a tolerance of 0.01 mm. Perhaps decreasing the
tolerance to 0.005 mm will give a noticeably higher quality product at a
reasonable cost. It does not follow that it is therefore desirable to
obtain more sophisticated [expensive] equipment in order to reduce the
tolerance to 0.0001 mm. There IS a law of diminishing returns, and a 
successful company must have a reasonably good grasp of where these 
break-even points are.
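
To put numbers on it, here's a toy C program.  The cost and benefit
curves are pure invention (no real machining data behind them), but
they show the break-even structure I mean:

#include <stdio.h>
#include <math.h>

/* Back-of-envelope diminishing-returns check for tolerance tightening.
 * All constants are invented; only the curve shapes matter. */
int main(void)
{
    double tol[] = { 0.01, 0.005, 0.001, 0.0005, 0.0001 };  /* mm */
    int n = sizeof tol / sizeof tol[0];

    for (int i = 0; i < n; i++) {
        double cost    = 50.0 / sqrt(tol[i]);    /* grows as tolerance tightens */
        double benefit = 2000.0 * (1.0 - exp(-0.01 / tol[i]));  /* saturates */
        printf("tol %.4f mm: cost %7.0f  benefit %7.0f  net %7.0f\n",
               tol[i], cost, benefit, benefit - cost);
    }
    return 0;
}

With these made-up curves, going from 0.01 mm to 0.005 mm improves the
net, and anything much past 0.001 mm loses money.  Finding where that
happens for a real product is exactly the job I'm talking about.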

As someone else in the group has pointed out, blindly adding additional
steel and concrete to a bridge is in most cases counterproductive.
I would suggest that the same is true of blindly adding quality.

If you can't measure something, you can't control it or set priorities.
I conclude that effective metrics are essential to improvements in software
quality. HOWEVER ... see below.

>
>I'd guess that a sizable percentage of the anti-metric camp is
>justifiably fearful of the effects of measuring (and, inevitably,
>concentrating on) the inessentials.  Another sizable percentage is
>spoiled rotten.

There is a real problem with the Heisenberg uncertainty principle 
when using metrics [that is, the act of measuring the system upsets the
system]. As an example, consider simple metrics programs which were
developed a few years back to "grade" the "style" of student programs.

To take one specific aspect, long identifier names were considered to be
better style than short ones. So what happens when the students start
running their programs through the style grader? They find that long 
identifier names improve their scores, and so they increase the number of
characters in the identifier names. They don't improve the semantic content
of the identifier names, just make them longer. The program has not really
gotten better; it is just scored as being better. And some could argue that
long, meaningless names are worse than short, meaningless names.
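
The failure mode is easy to demonstrate.  Here's a hypothetical grader
boiled down to its one rule (average identifier length), in C:

#include <stdio.h>
#include <string.h>

/* A deliberately naive "style grader": it rewards average identifier
 * length, so it is trivially gamed.  Hypothetical, not any real tool. */
static double style_score(const char *ids[], int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += strlen(ids[i]);
    return total / n;            /* longer names -> "better" style */
}

int main(void)
{
    const char *honest[] = { "i", "sum", "price" };
    const char *gamed[]  = { "iiiiiiiiii", "sum_value_number_thing",
                             "price_price_price_price" };

    printf("honest names score %.1f\n", style_score(honest, 3));
    printf("gamed names score  %.1f\n", style_score(gamed, 3));
    return 0;
}

The "gamed" names score six times higher without carrying one bit more
meaning.  The metric measured length when what we cared about was
semantic content.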

>
>Disclaimer:  I'm not an industrial engineer.  What I know, I know from
>taking industrial engineering courses, reading, and being an employee
>of a company that's been quite successfully fanatical about quality.
>
>Brian Marick
>Motorola @ University of Illinois

Is Motorola's current position on drug testing an aspect of their
being fanatical? :-) It seems that impairment testing [which they do not do]
would be more effective in improving quality. See the recent discussion
in misc.jobs.misc.



--------------------------------------           ******************************
Orville R. Weyrich, Jr., Ph.D.                   Certified Systems Professional
Internet: orville%weyrich@uunet.uu.net             Weyrich Computer Consulting
Voice:    (602) 391-0821                         POB 5782, Scottsdale, AZ 85261
Fax:      (602) 391-0023                              (Yes! I'm available)
--------------------------------------           ******************************

alan@tivoli.UUCP (Alan R. Weiss) (05/23/91)

In article <1991May16.151913.13770@weyrich.UUCP> orville@weyrich.UUCP (Orville R. Weyrich) writes:
>In article <1991May15.223135.12381@m.cs.uiuc.edu> marick@m.cs.uiuc.edu (Brian Marick) writes:
>>jls@netcom.COM (Jim Showalter) writes:
>>
>>>I find it odd that metrics have been
>>>successfully used in just about all other industries to improve quality,
>>>reduce risk, identify incipient problems, increase productivity, etc, but
>>
>>Let's be careful.  For example, consider industrial engineering, where
>>people used to measure "cost of quality".  The cost of quality was
>>calculated by adding together two curves.  The first curve was the
>>cost of shipping bad product, which decreases with increasing
>>inspection.  The second is the cost of inspection itself, which
>>obviously increases.  Businesses strived to position themselves at the
>>minimum, the "acceptable quality level".
>>
>>This turns out to be a bad idea.  The root problem is that it's very
>>hard to measure the true cost of shipping bad product.  For example,
>>how do you measure low-level annoyance with rattling parts --
>>something that doesn't result in a warranty charge, but may mean the
>>customer buys another brand next time?  This incomplete data then led
>>directly to mistaken strategies.

Measuring the long-term cost of quality is difficult, and can only be
done *after* the firm has experienced the cost of re-work, warranties,
and lost sales AND has surveyed its (former) customers to
determine defection causes.  Like I said, difficult.  Some have actually
done it, though, and some have indeed gone on faith (see below):

>>The modern approach, following Taguchi, Deming, and company, has (in
>>principle) abandoned a measurement (cost of quality) and replaced it
>>with a system of faith that asserts that increased quality is *always*
>>cost-effective.  In this case, the faith has worked better than the
>>metric.

This is actually incorrect.  Deming and other Modern Quality Theorists
insist upon Statistical Process Control to measure the Cost of Quality. 
I refer you to Crosby, Gilb, Deming, Juran, et al.
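
For those who haven't run into SPC, its core is the Shewhart control
chart: compute limits from an in-control baseline, then flag any later
point outside mean +/- 3 sigma and go hunt for the special cause.  A
bare-bones sketch in C (the data is invented, and a real chart would
usually estimate sigma from subgroup ranges rather than the raw sample
deviation):

#include <stdio.h>
#include <math.h>

/* Shewhart control chart, bare bones.  All data is invented. */
int main(void)
{
    double base[]    = { 10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9 };
    double new_pts[] = { 10.0, 10.1, 10.8, 9.9 };
    int nb = sizeof base / sizeof base[0];
    int nn = sizeof new_pts / sizeof new_pts[0];
    double mean = 0.0, var = 0.0;

    for (int i = 0; i < nb; i++) mean += base[i];
    mean /= nb;
    for (int i = 0; i < nb; i++) var += (base[i] - mean) * (base[i] - mean);
    double sigma = sqrt(var / (nb - 1));

    printf("control limits: %.3f .. %.3f\n", mean - 3*sigma, mean + 3*sigma);
    for (int i = 0; i < nn; i++)
        if (fabs(new_pts[i] - mean) > 3*sigma)
            printf("point %d (%.1f) is out of control -- find the cause\n",
                   i, new_pts[i]);
    return 0;
}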


>If you can't measure something, you can't control it or set priorities.
>I conclude that effective metrics are essential to improvements in software
>quality. HOWEVER ... see below.

You can certainly control the most important factor in software
development (according to Tom DeMarco in _Peopleware_), the
environment/culture, without measurements.  It's not XOR, it's AND.
You need metrics to manage the development process, to analyze costs,
and to pinpoint specific areas of improvement.  But you can
in fact manage quite well without them.  You may not end up with
QUALITY, that's all. :-)

>>I'd guess that a sizable percentage of the anti-metric camp is
>>justifiably fearful of the effects of measuring (and, inevitably,
>>concentrating on) the inessentials.  Another sizable percentage is
>>spoiled rotten.
>
>There is a real problem with the Heisenberg uncertainty principle 

Yes, this can be true.  That's part of the improvement process:  the mere
knowledge of being observed can have an improvement effect, which is more
properly attributed to the Hawthorne Effect (the Western Electric Hawthorne
Works studies of the 1920s, in which changing ANY environmental variable
improved productivity!).

>Is Motorola's current position on drug testing an aspect of their
>being fanatical? :-) It seems that impairment testing [which they do not do]
>would be more effective in improving quality. See the recent discussion
>in misc.jobs.misc.

From what I gather from folks who have left Moto-Austin, it is some kind
of personal crusade on the part of selected individuals.  Weird, and
surprisingly unscientific.


_______________________________________________________________________
Alan R. Weiss                           TIVOLI Systems, Inc.
E-mail: alan@tivoli.com                 6034 West Courtyard Drive,
E-mail: alan@whitney.tivoli.com	        Suite 210
Voice : (512) 794-9070                  Austin, Texas USA  78730
Fax   : (512) 794-0623
_______________________________________________________________________

marick@m.cs.uiuc.edu (Brian Marick) (05/23/91)

I wrote:

>>>The modern approach, following Taguchi, Deming, and company, has (in
>>>principle) abandoned a measurement (cost of quality) and replaced it
>>>with a system of faith that asserts that increased quality is *always*
>>>cost-effective.  In this case, the faith has worked better than the
>>>metric.

alan@tivoli.UUCP (Alan R. Weiss) writes:

>This is actually incorrect.  Deming and other Modern Quality Theorists
>insist upon Statistical Process Control to measure the Cost of Quality. 
>I refer you to Crosby, Gilb, Deming, Juran, et al.

I think you may have misunderstood my point.  I was talking about
measuring cost of quality, which is different from measuring some
quality attribute.  You of course measure quality attributes; the
system of faith is that you should never stop improving them.  This
differs from the traditional approach where you measure the cost of
improvement and stop when the product is "good enough".  
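
Taguchi states the difference directly with his loss function: the
traditional view charges nothing for any part inside the spec limits,
while Taguchi charges L(y) = k*(y - target)^2, so every step closer to
target still pays off.  A small C illustration (the constants are
invented):

#include <stdio.h>
#include <math.h>

#define TARGET 10.0
#define SPEC    0.5      /* spec limits: TARGET +/- SPEC */
#define K      40.0      /* loss coefficient, invented */

/* "Good enough" view: zero loss anywhere inside spec, flat scrap cost
 * outside. */
static double goalpost_loss(double y)
{
    return fabs(y - TARGET) <= SPEC ? 0.0 : 10.0;
}

/* Taguchi's view: loss grows quadratically as you drift off target,
 * even inside spec. */
static double taguchi_loss(double y)
{
    double d = y - TARGET;
    return K * d * d;
}

int main(void)
{
    double y[] = { 10.0, 10.2, 10.4, 10.6 };
    for (int i = 0; i < 4; i++)
        printf("y = %.1f: goalpost %5.2f, Taguchi %5.2f\n",
               y[i], goalpost_loss(y[i]), taguchi_loss(y[i]));
    return 0;
}

Inside the spec limits the goalpost loss says "stop, it's good
enough"; the quadratic loss never does.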

Here are some references for those who are interested.  The first two
are pretty easy to understand.  The concentration on Taguchi is
accidental; these are what I had photocopied and ready to hand.  You
find the same ideas in Deming (his point 5 is "neverending
improvement"), Crosby (the title of his book, _Quality is Free_), and
as far back as Shewhart, who originated statistical quality control in
his 1931 book.

Thomas Barker, 'Quality Engineering by Design: Taguchi's Philosophy',
Quality Progress, December 1986.

Raghu N. Kackar, 'Taguchi's Quality Philosophy:  Analysis and
Commentary', Quality Progress, December 1986.

Raghu N. Kackar, 'Off-Line Quality Control, Parameter Design, and the
Taguchi Method', Journal of Quality Technology, October 1985 (with commentary).

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick

alan@tivoli.UUCP (Alan R. Weiss) (05/24/91)

In article <1991May23.131739.11711@m.cs.uiuc.edu> marick@m.cs.uiuc.edu (Brian Marick) writes:
>I wrote:
>
>>>>The modern approach, following Taguchi, Deming, and company, has (in
>>>>principle) abandoned a measurement (cost of quality) and replaced it
>>>>with a system of faith that asserts that increased quality is *always*
>>>>cost-effective.  In this case, the faith has worked better than the
>>>>metric.
>
I wrote back:
>
>>This is actually incorrect.  Deming and other Modern Quality Theorists
>>insist upon Statistical Process Control to measure the Cost of Quality. 
>>I refer you to Crosby, Gilb, Deming, Juran, et al.

To which my friend Brian Marick responded with:

>I think you may have misunderstood my point.  I was talking about
>measuring cost of quality, which is different from measuring some
>quality attribute.

No, I didn't misunderstand :-)  The gurus all say to do both:  try to
measure the cost of quality (in Crosby's eyes, the cost of rework)
AND constantly improve your process (Deming).

>  You of course measure quality attributes; the
>system of faith is that should you never stop improving them.  This
>differs from the traditional approach where you measure the cost of
>improvement and stop when the product is "good enough".  

None of the gurus say you should stop improving quality, only that
such things as product shipment dates finally come into play.

I believe we are saying close to the same things.

_______________________________________________________________________
Alan R. Weiss                           TIVOLI Systems, Inc.
E-mail: alan@tivoli.com                 6034 West Courtyard Drive,
E-mail: alan@whitney.tivoli.com	        Suite 210
Voice : (512) 794-9070                  Austin, Texas USA  78730
Fax   : (512) 794-0623
_______________________________________________________________________