[comp.software-eng] SQA

bwb@sei.cmu.edu (Bruce Benson) (10/19/90)

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

bwb@sei.cmu.edu (Bruce Benson) (10/22/90)

In article <DWIGGINS.90Oct17183946@atsun.a-t.com> dwiggins@atsun.a-t.com 
(Don Dwiggins) writes:
>In article <9077@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>
>   Nevertheless, I still think SQA is a bandage on 
>   a problem and inherently discourages quality ....
>
>Could you expand on this a bit?  What characteristics of SQA do you feel
>cause the problems?  How should it be changed, or what should it be replaced
>by, to improve the situation?

Deming said it well when he suggested that if part of your 
organization is devoted to finding errors, then that's exactly what 
they'll do, find errors.  It's their job and they are rewarded for 
"catching" the mistakes.  This part of the organization has a 
strong interest (their jobs, promotions, bonuses) in errors (buggy 
software).

Of the 12-odd books I've looked at on SQA, the following two 
concepts are consistently used as the core basis for SQA:

1. Put your "best" people in SQA to leverage their expertise 
against all projects.

2. Provide dedicated and independent "judgement" so management 
knows "quality" is being built into the product. 

Contrast this with the job of the software development manager. 
His or her job is to produce a quality product (meets 
requirements) on a given cost and schedule.    If all the neat 
things SQA is going to do (reviews, evaluations, etc) will really 
result in quality - why is the manager not doing them?  Often 
claimed is that the manager is completely focused on cost/schedule.  

Assume SQA can delay a project for poor quality or poor practices.  
This means the software manager can't meet the target 
cost/schedule.  If SQA can delay a project then top management has 
agreed that quality is more important than cost/schedule.  
Therefore, why is the software manager totally focused on 
cost/schedule?   Habit?   

What I'm suggesting is that if SQA really worked, then it would 
become obsolete almost immediately.  If the manager can 
unswervingly focus on cost/schedule, (s)he can unwaveringly focus 
on quality/cost/schedule.

Let me try to illustrate my argument with an example:

For many reasons we were in a situation where if we produced buggy 
software we would no longer be doing software.  We defined and 
evolved a software process and had some pretty spectacular 
results: less than a handful of errors over three years.

During this period our independent testers showed signs of strain.  
They would write up several pages of "errors" and give these 
directly to top management without discussing them with us.  More 
and more "error" reports were design and usability related.  It 
stretched from days to weeks and then to months before test would 
get around to testing new batches of enhancements.   Given the 
actual number and severity of real errors identified by testers 
and users, we did not need independent test (to catch errors).

Living through the above example (and a few others) convinced me 
that SQA/test is like the evening news: you get 30 minutes of news 
regardless of worth.  Of course these folks are victims of the 
situation, not the cause.  

This example illustrates two beliefs: 1) software management can do 
quality without an SQA; and 2) "error finding" organizations can 
cause more harm than good.  Given that any organization attempts 
to justify and perpetuate its own existence, I can't see 
purposely creating an SQA group (as defined by points 1 and 2 
above).

Conclusion:  If you can define and measure quality well enough to 
create an effective SQA, then you can use these definitions and 
measures directly without an SQA.  In an improving organization, 
SQA and independent test will diminish in importance with time. If 
this is not happening, you are not improving.
 
Comments?

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

jcardow@blackbird.afit.af.mil (James E. Cardow) (10/23/90)

bwb@sei.cmu.edu (Bruce Benson) writes:

>In article <DWIGGINS.90Oct17183946@atsun.a-t.com> dwiggins@atsun.a-t.com 
>(Don Dwiggins) writes:
>>In article <9077@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>>
>>   Nevertheless, I still think SQA is a bandage on 
>>   a problem and inherently discourages quality ....
>>
>>Could you expand on this a bit?  What characteristics of SQA do you feel
>>cause the problems?  How should it be changed, or what should it be replaced
>>by, to improve the situation?

>Deming said it well when he suggested that if part of your 
>organization is devoted to finding errors, then that's exactly what 
>they'll do, find errors.  It's their job and they are rewarded for 
>"catching" the mistakes.  This part of the organization has a 
>strong interest (their jobs, promotions, bonuses) in errors (buggy 
>software).

>Of the 12-odd books I've looked at on SQA, the following two 
>concepts are consistently used as the core basis for SQA:

>1. Put your "best" people in SQA to leverage their expertise 
>against all projects.

>2. Provide dedicated and independent "judgement" so management 
>knows "quality" is being built into the product. 

>Contrast this with the job of the software development manager. 
>His or her job is to produce a quality product (meets 
>requirements) on a given cost and schedule.    If all the neat 
>things SQA is going to do (reviews, evaluations, etc) will really 
>result in quality - why is the manager not doing them?  Often 
>claimed is that the manager is completely focused on cost/schedule.  

> <extensive deletions for brevity>

>This example illustrates two beliefs 1) software management can do 
>quality without an SQA; and 2) "error finding" organizations can 
>cause more harm than good.  Given that any organization attempts 
>to justify and perpetuate its own existence, I can't see 
>purposely creating an SQA group (as defined by points 1 and 2 
>above).

>Conclusion:  If you can define and measure quality well enough to 
>create an effective SQA, then you can use these definitions and 
>measures directly without an SQA.  In an improving organization, 
>SQA and independent test will diminish in importance with time. If 
>this is not happening, you are not improving.
> 
>Comments?

Good points!  However, the first response has to be "why aren't we
currently developing high quality software?"  This leads into a long
string of interesting though totally ridiculous arguments.  Things like
"I'll require quality software (mandate) and it will happen" often become
the result.

Maybe the goal of SQA should not be finding errors, but to terminate its own
existence.  Now this requires some real guts on the part of the managers
and the poor soul taking on the job, but is the opportunity worth the risk?

Let's assume a few things for the sake of the argument:

1)  We (the software industry) are not currently producing sufficiently high
quality software, nor consistently producing quality software.  (i.e. we have a 
problem).

2)  We are not purely facing a resource based constraint (pumping money into
software development will not instantly solve the problem).

3)  Project management is focused on delivery (cost and schedule), as it should
be.

4)  Because of 3, project management cannot focus on the details of improving 
quality (technology innovation?).

Now, how do we solve 1, within the bounds of 2-4?

SQA, as Bruce has pointed out, is focused on self-preservation.  How about this:

1.  Create an SQA organization with the following requirements:
	a.  Sunset clause as part of the charter.
	b.  Assign a "rising star" as the manager.
	c.  Base evaluations on progress at achieving (a).

2.  Define quality in terms of the "user's view", not testing results.
	a.  Delivery of product within time and cost is a plus.
	b.  Negative value of latent defects increases with time, user 
		identification (after delivery) being worst case.
	c.  User requests for improvements beyond original spec counting
		as highly positive (indication of acceptance of product).
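The "user's view" scoring in point 2 could be sketched roughly as follows. This is only an illustration: the weights, the 30-day scaling, and the function name are all my own assumptions, not part of the proposal.

```python
# Hypothetical sketch of the "user's view" quality score in point 2.
# All weights and field names are invented for illustration.

def quality_score(on_time, on_budget, latent_defects, enhancement_requests):
    """Score a release: on-time/on-cost delivery is a plus, latent
    defects count against it (weighted by how late they surfaced),
    and user enhancement requests count strongly in favor."""
    score = 0.0
    if on_time:
        score += 1.0
    if on_budget:
        score += 1.0
    # latent_defects: days-after-delivery at which each defect surfaced;
    # the later a defect is found, the worse it counts (point 2b).
    for days_after_delivery in latent_defects:
        score -= 1.0 + days_after_delivery / 30.0
    # Requests for improvements beyond the original spec indicate
    # acceptance of the product (point 2c).
    score += 0.5 * enhancement_requests
    return score
```

The exact weights don't matter; what matters is that the score is driven by delivery and field experience, not by counts of errors caught in test.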


Conclusion:  I don't think that we can define quality well enough to 
require managers to "do it".  I do think that by establishing an SQA
organization we can cause the definition to be created, but only with 
talent and reward as the carrot.  

These are truly random thoughts, based on trying to define SQA's value for a
software engineering course and thinking about the same issues that Bruce
raised.  I don't have any answers, just plenty of questions.  I do welcome
any comments and opinions.

Jim Cardow, Capt, USAF
Instructor of Software Engineering
Air Force Institute of Technology

jcardow@blackbird.afit.af.mil

bks@alfa.berkeley.edu (Brad Sherman) (10/23/90)

In article <9150@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>In article <DWIGGINS.90Oct17183946@atsun.a-t.com> dwiggins@atsun.a-t.com 
>(Don Dwiggins) writes:
>>In article <9077@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>
>Conclusion:  If you can define and measure quality well enough to 
>create an effective SQA, then you can use these definitions and 
>measures directly without an SQA.  In an improving organization, 
>SQA and independent test will diminish in importance with time. If 
>this is not happening, you are not improving.
> 
>Comments?

This reminds me of the office automation paradox:
Once you get the office personnel and paper-flow organized enough to
use your software, there is no additional benefit to be gained by
actually using the software.

------------------
	Brad Sherman (bks@alfa.berkeley.edu)

dalamb@qucis.queensu.CA (David Lamb) (10/25/90)

In article <9150@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>Conclusion:  If you can define and measure quality well enough to
>create an effective SQA, then you can use these definitions and
>measures directly without an SQA.  In an improving organization,
>SQA and independent test will diminish in importance with time. If
>this is not happening, you are not improving.
>
There's an inherent tension between getting a product out by a deadline
(the bias of developers) and getting out a product with no defects (the
bias of SQA groups).  You want a separate SQA group not so much
because they specialize in SQA as because they aren't the developers,
so don't have the developer's biases and mindset about the product.
There's a right way and a wrong way to introduce an SQA group:

Wrong:
			      -----------
			     | president |
			      -----------
	  ----------------                      --------
	 | VP Development |                    | VP SQA |
	  ----------------                      -------- 
	  /          \       ...                   /     \
  ------------    ------------          -----------    -----------
 | Project 1  |  | Project 2  | ...    | Project 1 |  | Project 2 | ...
 | Developers |  | Developers | ...    |   SQA     |  |   SQA     |
  ------------    ------------          -----------    -----------
There's no one to arbitrate the conflict between the goals of the developers
and the SQA.


Right:
                              -----------
                             | president |
                              -----------
			      .... (whatever other levels of company)
          -----------                          -----------
         | Project 1 |                        | Project 2 |
	 |  Manager  |                        |  Manager  |
          -----------                          -----------
         /        \                             /     \
 ------------    ------------         ------------     -----------
| Project 1  |  | Project 1 |        | Project 2  |   | Project 2 |    
| Developers |  |    SQA    |        | Developers |   |   SQA     |
 ------------    ------------         ------------     -----------

The manager's mission is to get out a high-quality product by a deadline, so
is the right one to balance "meet the deadline" against "don't let through
any defects."

David Alex Lamb			ARPA Internet:	David.Lamb@cs.cmu.edu
Department of Computing				dalamb@qucis.queensu.ca
    and Information Science	uucp:   	...!utzoo!utcsri!qucis!dalamb
Queen's University		phone:		(613) 545-6067
Kingston, Ontario, Canada K7L 3N6	

murphyn@motcid.UUCP (Neal P. Murphy) (10/25/90)

bwb@sei.cmu.edu (Bruce Benson) writes:

>...
>things SQA is going to do (reviews, evaluations, etc) will really 
>result in quality - why is the manager not doing them?  Often 
>claimed is that the manager is completely focused on cost/schedule.  

This is often true. Also true is that the manager is focused on his
people and their work - keeping them from getting stuck, overwhelmed,
etc. He is dealing with people, who can have inexplicable reactions
to most anything. It is extremely helpful to have a separate group
handling the paperwork.

>Assume SQA can delay a project for poor quality or poor practices.  
>This means the software manager can't meet the target 
>cost/schedule.  If SQA can delay a project then top management has 
>agreed that quality is more important than cost/schedule.  
>Therefore, why is the software manager totally focused on 
>cost/schedule?   Habit?   

Perhaps. Perhaps upper management still pushes cost/schedule, while
not realizing/accepting that software design/development can't be
scheduled.

>What I'm suggesting is that if SQA really worked, then it would 
>become obsolete almost immediately.  If the manager can 
>unswervingly focus on cost/schedule, (s)he can unwaveringly focus 
>on quality/cost/schedule.

A large SQA organization would be obsolete. A smaller one would still
be required to handle the paperwork and provide oversight (making sure
the manager and his people are following the development process
correctly).

>Let me try to illustrate my argument with an example:

>For many reasons we were in a situation where if we produced buggy 
>software we would no longer be doing software.  We defined and 
>evolved a software process and had some pretty spectacular 
>results: less than a handful of errors over three years.
>...
>get around to testing new batches of enhancements.   Given the 
>actual number and severity of real errors identified by testers 
>and users, we did not need independent test (to catch errors).

Independent testers serve a very useful purpose. The developers/designers
are too close to the work to spot errors and problems. Too often they
automatically use an easy workaround to bypass the problem. Independent
testers catch all problems. However, if you have a fair number
of customers using your product, they become your independent testers;
your official tester can be freed up to work on some other project.

>...
>This example illustrates two beliefs 1) software management can do 
>quality without an SQA; and 2) "error finding" organizations can 
>cause more harm than good.  Given that any organization attempts 
>to justify and perpetuate its own existence, I can't see 
>purposely creating an SQA group (as defined by points 1 and 2 
>above).

>Conclusion:  If you can define and measure quality well enough to 
>create an effective SQA, then you can use these definitions and 
>measures directly without an SQA.  In an improving organization, 
>SQA and independent test will diminish in importance with time. If 
>this is not happening, you are not improving.
> 
>Comments?

I agree. But the operative word is `can'. The ability to do something
and actually doing it are usually miles apart. Often, software development
groups do not employ rigorous enough SQA-like processes, thereby leaving
an opening for an official SQA group.

What you've said here is effectively true, but not often seen in reality.
Perhaps, with more people like you in industry, we can change reality.

NPN

donm@margot.Eng.Sun.COM (Don Miller) (10/26/90)

In article <1711@blackbird.afit.af.mil> jcardow@blackbird.afit.af.mil (James E. Cardow) writes:
>bwb@sei.cmu.edu (Bruce Benson) writes:
>
>>Deming said it well when he suggested that if part of your 
>>organization is devoted to finding errors, then that's exactly what 
>>they'll do, find errors.  It's their job and they are rewarded for 
>>"catching" the mistakes.  This part of the organization has a 
>>strong interest (their jobs, promotions, bonuses) in errors (buggy 
>>software).
>

   Yes, Deming does focus on error prevention, rather than error detection,
   as a goal.  Thus, the focus of the SQA group would be, in his eyes, the 
   detection of product defects towards the correction of process defects.
   Deming would approve of SQA to the extent that the company climate allows 
   the SQA group to suggest process changes which can be implemented as a 
   result of product deficiencies.

   Unfortunately, as is almost universally discussed, the reputation of most
   SQA groups is not conducive to recommending process changes for developers.
   Hence, ...

>
>>1. Put your "best" people in SQA to leverage their expertise 
>>against all projects.
>
>>2. Provide dedicated and independent "judgement" so management 
>>knows "quality" is being built into the product. 
>

>>Contrast this with the job of the software development manager. 
>>His or her job is to produce a quality product (meets 
>>requirements) on a given cost and schedule.    If all the neat 
>>things SQA is going to do (reviews, evaluations, etc) will really 
>>result in quality - why is the manager not doing them?  Often 
>>claimed is that the manager is completely focused on cost/schedule.  

   I agree that it is the job of the development manager in most 
   organizations to ensure that the tradeoffs implicit in product 
   development between quality, cost, schedule, resources, and content are 
   made intelligently.  However, I don't believe that the things
   which result in quality are "things SQA is going to do".  Only
   if the development teams themselves adopt the process improvement
   recommendations will an increase in quality be possible.

>
>>Conclusion:  If you can define and measure quality well enough to 
>>create an effective SQA, then you can use these definitions and 
>>measures directly without an SQA.  In an improving organization, 
>>SQA and independent test will diminish in importance with time. If 
>>this is not happening, you are not improving.
>> 

   Again, quality improvements are only made possible by the creators
   of the product.  The existence of an SQA organization assures that
   there are individuals who are focussed on quality and processes
   rather than products.  The demands on development managers provide 
   the impetus for an independent focus.

>
>Let's assume a few things for the sake of the argument:

   Pretty reasonable assumptions, I'd say.
>
>1)  We (the software industry) are not currently producing sufficient quality
>software, or consistently producing quality software.  (i.e. we have a 
>problem).
>
>2)  We are not purely facing a resource based constraint (pumping money into
>software development will not instantly solve the problem).

   I've already commented that costs, content, and schedule are factors.
   The final, and most important, factor is what I call developmental efficiency.
   While it is derived from the others, it is important to address.
   It is the quality-content produced per cost-resource-time.  This
   seems to be the relevant figure software development organizations
   want maximized.  Certainly, some will focus on different parts of the
   equation, but process change is the surest way to impact them all.
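As a rough formalization (my own, not Don's exact definition), "developmental efficiency" might be written as a simple ratio; the multiplicative form is an assumption, since the post only gives the shape "quality-content per cost-resource-time":

```python
# Toy sketch of "developmental efficiency": quality-weighted content
# delivered per unit of cost, staffing, and calendar time.
# The multiplicative form and units are assumptions for illustration.

def developmental_efficiency(quality, content, cost, resources, time):
    """Return (quality * content) / (cost * resources * time)."""
    return (quality * content) / (cost * resources * time)
```

Whatever the exact form, the point stands: a process change can move every term at once, where cost-cutting or schedule pressure moves only one.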

>
>3)  Project management is focused on delivery (cost and schedule), as it should
>be.
>
>4)  Because of 3, project management cannot focus on the details of improving 
>quality (technology innovation?).

   Hence the need for an independent, quality process focused organization.

>
>Now, how do we solve 1, within the bounds of 2-4?
>
>SQA as Bruce has pointed out is focused on self preservation.  How about this:
>
>1.  Create an SQA organization with the following requirements:
>	a.  Sunset clause as part of the charter.
>	b.  Assign a "rising star" as the manager.
>	c.  Base evaluations on progress at achieving (a).
>

   What's a "sunset clause"?  Why should an SQA organization achieve it?

>2.  Define quality  in terms of "user's view" not testing results.
>	a.  Delivery of product within time and cost is a plus.
>	b.  Negative value of latent defects increases with time, user 
>		identification (after delivery) being worst case.
>	c.  User requests for improvements beyond original spec counting
>		as highly positive (indication of acceptance of product).
>

   Exactly: quality is the degree to which you meet the customer's requirements.
   These things all address that view.

>
>Conclusion:  I don't think that we can define quality well enough to 
>require managers to "do it".  I do think that by establishing an SQA
>organization we can cause the definition to be created, but only with 
>talent and reward as the carrot.  
>

   Since I believe in the requirements centered approach to software
   development, I agree that the existence of an SQA organization
   can help to the degree that it ensures that the product created
   and the processes used truly meet the requirements of the market.
   On the other hand, if those who focus on such things are not able
   to express their findings in an acceptable way, development managers
   will continue "doing it" just like they always have.  What choice
   do they have?

   Well, I'm at least one person who thinks that quality products
   will become a strategic competitive advantage in the 90's.  What
   do you think?

--
Don Miller                              |   #include <std.disclaimer>
Software Quality Engineering            |   #define flame_retardent \
Sun Microsystems, Inc.                  |   "I know you are but what am I?"
donm@eng.sun.com                        |   

Mark_Richter@TRANSARC.COM (10/27/90)

I'm always intrigued by the "SQA or not" debate. The last post I
read made a point about "sunset clauses" on SQA, and also raised
the old "quality versus schedule" issue.

A couple of short comments:

First, as a development manager I've never understood the old
quality/schedule tradeoff argument. If you are going to be
successful then these are not interchangeable things. Sure, I can
race the schedule, quality be damned, and after my product
fails in the marketplace, not be around for release 2. So what
have I accomplished?

Second, I want to also quibble with the sunset clause issue. But
first let me define SQA in my own terms: SQA is an entity that
specializes in good techniques for producing high quality work
in my organization. This means that SQA is not a testing entity.
Just as I need gifted people who understand the technology of
the software product we are trying to build, I also need gifted
people who understand good ways to put everything together along
the way. Under these circumstances the last thing I want is
a sunset clause on SQA.

Finally, the post that showed SQA as part of the project team is
correct. A crucial factor in the success of any development effort
is accountability. It's awfully hard to have accountability with
parallel chains of command over the same project. I've generally found
the separate-corporate-entity approach to lead to a lot of finger
pointing, and not necessarily higher quality products.

-- mark richter
markr@transarc.com
markr%transarc.com@uunet.uu.net

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (10/30/90)

The problem is that most people confuse "Quality Assurance" with finding
errors/defects.  This improperly equates QA with Testing.  Testing is only
a subset of the functions that QA should perform.

QA should be as concerned with *preventing* defects as with finding them.
QA should be involved with the entire SDLC process, not just testing the
final product.  QA should be involved with improving the entire process,
e.g. setting standards, conducting in-progress reviews, performing
post-mortems, etc.


Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

jeremy@epochsys.UUCP (Jeremy L. Mordkoff) (11/02/90)

In article <976@qusunitg.queensu.CA> dalamb@qucis.queensu.CA (David Lamb) writes:
>There's an inherent tension between getting a product out by a deadline
>(the bias of developers) and getting out a product with no defects (the
>bias of SQA groups).  You want a separate SQA group not so much
>because they specialize in SQA as because they aren't the developers,
>so don't have the developer's biases and mindset about the product.
>There's a right way and a wrong way to introduce an SQA group:
>
right=SQA and dev report to a single project manager
wrong=SQA is a separate department and the two report to
	a director or VP of (SW) engineering

This 'right' scenario is a good structure for a testing group. It is
_not_ appropriate for an SQA group. SQA needs an advocate at the same
peer level as at least the group that must support this project after
FCS. This is usually one level above the project manager and sometimes
two. This is required because of differences in opinion as to the
meaning of quality (bugs vs.  usability for instance).

An effective SQA effort needs upper-management backing, so it is better
to insert it as high as possible in the structure. You must pool the
SQA talent, rather than disperse it among the projects. SQA's seemingly
unattainable goal is to make testing obsolete. Testing is unique to
each project; the goals and skills of SQA are common to all.

This model also implies that the groups are of comparable size. The
best ratio I have seen is three to one; the worst, twenty to one.
The 'wrong' model has the advantage of pooling the SQA talent.

The model I advocate is a hybrid, and it works well. I report to the
director of engineering, but I get my day to day direction from the
project leader. In all the ways that matter to testing, our group
follows the 'right' model, but it is to the director of engineering
that I give the final go-ahead. Just knowing this, the project
manager will always get my opinion first, putting me on par with the
developers as a group. This also allows me to work on several projects
at once, and that frees me to get involved from day one. I feel that by
pushing for better specifications we can build better products, so I
take the role of spec critic very seriously.

My next goal is to make customer support part of SQA (or vice versa) so
that the people who must support the product have more say in when it
is released.
-- 
Jeremy L. Mordkoff		uunet!epochsys!jeremy
Epoch Systems, Inc.		{harvard!cfisun,linus!alliant}!palladium!jeremy
8 Technology Drive 		(508)836-4711 x346 (voice)
Westboro, Ma. 01581		(508)836-4884 (fax)	

rh@smds.UUCP (Richard Harter) (11/03/90)

One thing I have come to appreciate is the value of automated testing.
In this scenario the test group is responsible for the control and
development of the test software -- they don't run the tests.  The
developers don't explicitly run the tests either.  Testing is automatically
triggered by checkin (or equivalent) scripts with checkin acceptance
being conditional on passing the test sequences.  And, of course,
testing is a service that can be requested at any time by a developer.

An SPR (software problem report) represents a failure in the testing
procedures -- it is the task of the test group to modify the testing
procedures to test for the problem.

An MR (modification request) is a dual task; the development group
does the modification and the test group develops the revised test
procedures.

One way to look at things is that testing is a service for development;
development is greatly simplified if comprehensive testing is unobtrusively
available.
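The checkin-triggered scheme described above might be sketched like this. The test command and the `checkin` wrapper are assumptions for illustration; in practice this would be a script wrapped around the version-control checkin (e.g. `ci` under RCS):

```python
# Minimal sketch of checkin-triggered automated testing.
# The test command ("make test") and wrapper name are assumptions.

import subprocess
import sys

def checkin(files, test_cmd=("make", "test")):
    """Accept a checkin only if the full test sequence passes.
    Neither developers nor testers run the tests by hand; the
    checkin script triggers them automatically."""
    result = subprocess.run(list(test_cmd))
    if result.returncode != 0:
        print("checkin rejected: test sequence failed", file=sys.stderr)
        return False
    # The actual checkin (e.g. 'ci' under RCS) would happen here.
    print("checkin accepted: %d file(s)" % len(files))
    return True
```

An SPR then becomes, exactly as described, a hole in the test sequence: the fix belongs in `test_cmd`'s suite, so the same failure can never pass checkin again.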
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

cox@stpstn.UUCP (Brad Cox) (11/04/90)

In article <884@epochsys.UUCP> jeremy@epochsys.UUCP (Jeremy L. Mordkoff) writes:
>In article <976@qusunitg.queensu.CA> dalamb@qucis.queensu.CA (David Lamb) writes:
>>There's an inherent tension between getting a product out by a deadline
>>(the bias of developers) and getting out a product with no defects (the
>>bias of SQA groups).

I used to agree that this tension was inherent, but I'm no longer so sure.

I'm sure the cottage industry gunsmiths of 200 years ago would have argued
that there was an 'inherent' tension between the greater productivity of
cut-to-fit hand craftsmanship and the user's desire for high-precision
interchangeable parts. They were blindsided by the ultimate realization
that high-precision interchangeable parts were the *key* to greater
productivity. This realization was a paradigm shift that not only eliminated
the gunsmiths as a group, but promoted the U.S. to dominance over England.

The Japanese played and won this same game some years later by realizing 
that higher quality was the key to higher productivity throughout 
manufacturing. Again a counterintuitive paradigm shift, that higher
quality means higher productivity; again leveraged into major economic
impacts.

But of course, this was for manufacturing. Everybody knows software
is different. This could *never* happen to *us*...;-)
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482